Unit 5 Compiler PDF
INTRODUCTION
The code produced by straightforward compiling algorithms can often be made to run
faster or take less space, or both. This improvement is achieved by program transformations
that are traditionally called optimizations. Compilers that apply code-improving
transformations are called optimizing compilers.
Optimizations are classified into two categories:

Machine-independent optimizations: program transformations that improve the target code
without taking into consideration any properties of the target machine.

Machine-dependent optimizations: transformations based on register allocation and
utilization of special machine-instruction sequences.
NOTES.PMR-INSIGNIA.ORG
Function-Preserving Transformations
There are a number of ways in which a compiler can improve a program without
changing the function it computes.
The function-preserving transformations examined here are common-subexpression
elimination, copy propagation, dead-code elimination, and constant folding.

Common-Subexpression Elimination:
Frequently, a program will include several calculations of the same value, such as an
offset in an array. Some of these duplicate calculations cannot be avoided by the
programmer because they lie below the level of detail accessible within the source
language.
Copy Propagation:
Assignments of the form f := g are called copy statements, or copies for short. The idea
behind the copy-propagation transformation is to use g for f wherever possible after the
copy statement f := g; in other words, one variable is used in place of another.
This may not appear to be an improvement on its own, but as the example shows it gives
us an opportunity to eliminate the copied variable x.
For example:
x=Pi;
A=x*r*r;
The optimization using copy propagation can be done as follows:
A=Pi*r*r;
Here the variable x is eliminated
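The substitution above can be sketched as a small pass over three-address statements. This is an illustrative sketch, not the algorithm of any particular compiler: statements are assumed to be (destination, operand-list) tuples, and a single-operand statement is treated as a copy.

```python
def propagate_copies(block):
    """Copy propagation through one basic block of (dest, operands) tuples."""
    copies = {}                 # copy target -> copy source, e.g. {'x': 'Pi'}
    result = []
    for dest, operands in block:
        # Replace each operand by its copy source, if one is known.
        operands = [copies.get(op, op) for op in operands]
        # dest is redefined here, so kill every copy that mentions dest.
        copies = {t: s for t, s in copies.items() if dest not in (t, s)}
        if len(operands) == 1:
            copies[dest] = operands[0]      # record the copy dest := src
        result.append((dest, operands))
    return result
```

On the example above, x=Pi; A=x*r*r becomes A=Pi*r*r, after which the assignment to x is dead and can be removed by dead-code elimination.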
Dead-Code Eliminations:
A variable is live at a point in a program if its value can be used subsequently; otherwise,
it is dead at that point. A related idea is dead or useless code: statements that compute
values that never get used. While the programmer is unlikely to introduce any dead code
intentionally, it may appear as the result of previous transformations. An optimization can
be done by eliminating dead code.
Example:
i=0;
if(i==1)
{
a=b+5;
}
Here, the if statement is dead code because its condition will never be satisfied.
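Within a single basic block, dead assignments can be found with one backward pass. The sketch below is illustrative only: statements are (dest, uses) pairs, and live_out (the variables needed after the block) is assumed to be known.

```python
def eliminate_dead_code(block, live_out):
    """Drop assignments whose target is never used later (one basic block)."""
    live = set(live_out)
    kept = []
    for dest, uses in reversed(block):   # scan backward through the block
        if dest in live:
            kept.append((dest, uses))
            live.discard(dest)           # dest is redefined above this point
            live.update(uses)            # its operands must stay live
        # else: the statement computes a value never used -- drop it
    kept.reverse()
    return kept
```

Here the assignment to t is deleted because t is neither in live_out nor used by a kept statement.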
Constant folding:
Deducing at compile time that the value of an expression is a constant, and using the
constant instead, is known as constant folding. One advantage of copy propagation is that
it often turns the copy statement into dead code.

For example,
a=3.14157/2 can be replaced by
a=1.570785, thereby eliminating a division operation.
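A minimal folding step can be sketched as follows; the (op, left, right) triple representation is an assumption made for the illustration.

```python
import operator

# Constant folding for one binary operation (illustrative sketch).
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def fold(op, left, right):
    """Evaluate op at compile time when both operands are numeric literals."""
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        return OPS[op](left, right)     # folded to a constant now
    return (op, left, right)            # not constant: leave the expression
```

Non-constant operands leave the expression untouched, so the pass is safe to run everywhere.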
Loop Optimizations:
We now give a brief introduction to a very important place for optimizations, namely
loops, especially the inner loops where programs tend to spend the bulk of their time. The
running time of a program may be improved if we decrease the number of instructions in
an inner loop, even if we increase the amount of code outside that loop.
Three techniques are important for loop optimization: code motion, induction-variable
elimination, and reduction in strength.

Code Motion:
An important modification that decreases the amount of code in a loop is code motion.
This transformation takes an expression that yields the same result independent of the
number of times a loop is executed (a loop-invariant computation) and places the
expression before the loop. Note that the notion "before the loop" assumes the existence
of an entry for the loop. For example, evaluation of limit-2 is a loop-invariant
computation in the following while-statement:

while (i <= limit-2)

Code motion will result in the equivalent of:

t= limit-2;
while (i<=t)
Induction Variables :
Loops are usually processed inside out. For example, consider the loop around B3.
Note that the values of j and t4 remain in lock-step; every time the value of j decreases by
1, that of t4 decreases by 4, because 4*j is assigned to t4. Such identifiers are called
induction variables.
When there are two or more induction variables in a loop, it may be possible to get rid of
all but one by the process of induction-variable elimination. For the inner loop around
B3 in the figure, we cannot get rid of either j or t4 completely; t4 is used in B3 and j in B4.
However, we can illustrate reduction in strength and a part of the process of
induction-variable elimination. Eventually j will be eliminated when the outer loop of B2
- B5 is considered.
Example:
As the relationship t4 = 4*j surely holds after such an assignment to t4, and t4 is not
changed elsewhere in the inner loop around B3, it follows that just after the statement
j:=j-1 the relationship t4 = 4*j + 4 must hold. We may therefore replace the assignment
t4:= 4*j by t4:= t4-4. The only problem is that t4 does not have a value when we enter block B3
for the first time. Since we must maintain the relationship t4 = 4*j on entry to the block B3,
we place an initialization of t4 at the end of the block where j itself is initialized.
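The transformation can be checked by running both forms of the loop; the two functions below are an illustrative model of the inner loop, not the actual basic blocks of the figure.

```python
def before(n):
    """Inner loop before strength reduction: 4*j recomputed each iteration."""
    values = []
    j = n
    while j > 0:
        j = j - 1
        t4 = 4 * j            # multiplication on every iteration
        values.append(t4)
    return values

def after(n):
    """After strength reduction: t4 initialised once, then t4 := t4 - 4."""
    values = []
    j = n
    t4 = 4 * j                # initialisation placed before the loop
    while j > 0:
        j = j - 1
        t4 = t4 - 4           # cheaper subtraction maintains t4 = 4*j
        values.append(t4)
    return values
```

Both versions produce the same sequence of t4 values, which is exactly what makes the replacement safe.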
(Figure: the inner loop before and after strength reduction on t4.)
Reduction In Strength:
Reduction in strength replaces an expensive operation by a cheaper one, as when the
multiplication t4:= 4*j above is replaced by the subtraction t4:= t4-4.

Interchange of statements:
Two statements

t1:=b+c
t2:=x+y

can be interchanged or reordered in the basic block when the value of t1
does not affect the value of t2.
Algebraic Transformations:
Algebraic transformations use algebraic identities to replace expensive computations by
cheaper ones.

Example:
x:=x+0 can be removed
x:=y**2 can be replaced by the cheaper statement x:=y*y
The compiler writer should examine the language carefully to determine what
rearrangements of computations are permitted, since computer arithmetic does not always
obey the algebraic identities of mathematics. Thus, a compiler may evaluate x*y-x*z as
x*(y-z) but it may not evaluate a+(b-c) as (a+b)-c.
A node d of a flow graph dominates node n, written d dom n, if every path from the
initial node of the flow graph to n goes through d. Dominator information is presented in
a tree, called the dominator tree, in which the initial node is the root.

The parent of each other node is its immediate dominator.
Each node d dominates only its descendants in the tree.

The existence of the dominator tree follows from a property of dominators: each node n
has a unique immediate dominator m that is the last dominator of n on any path from the
initial node to n. In terms of the dom relation, the immediate dominator m has the
property that if d ≠ n and d dom n, then d dom m.
Example: the dominator sets D(n) for the ten-node flow graph considered here are:
D(1)={1}
D(2)={1,2}
D(3)={1,3}
D(4)={1,3,4}
D(5)={1,3,4,5}
D(6)={1,3,4,6}
D(7)={1,3,4,7}
D(8)={1,3,4,7,8}
D(9)={1,3,4,7,8,9}
D(10)={1,3,4,7,8,10}
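These sets can be reproduced with the standard iterative algorithm: D(n0) = {n0}, and for every other node n, D(n) = {n} ∪ (the intersection of D(p) over all predecessors p of n), iterated to a fixed point. The edge list below is an assumption: it reconstructs the missing figure from the back edges listed later in this section.

```python
# Ten-node example flow graph (reconstructed; an assumption).
EDGES = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 3), (4, 5), (4, 6),
         (5, 7), (6, 7), (7, 4), (7, 8), (8, 3), (8, 9), (8, 10),
         (9, 1), (10, 7)]

def dominators(nodes, edges, start):
    """Iterative dominator computation to a fixed point."""
    preds = {n: set() for n in nodes}
    for a, b in edges:
        preds[b].add(a)
    dom = {n: set(nodes) for n in nodes}     # start from the full set
    dom[start] = {start}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == start:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

DOM = dominators(list(range(1, 11)), EDGES, 1)
```

With this edge list the computed DOM sets match the D(n) sets listed above.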
Natural Loop:
One application of dominator information is in determining the loops of a flow graph suitable
for improvement.
One way to find all the loops in a flow graph is to search for edges whose heads dominate
their tails. If a → b is an edge, then b is the head and a is the tail. Such edges are called
back edges.
Example:
In the above graph,

7 → 4, since 4 DOM 7
10 → 7, since 7 DOM 10
4 → 3
8 → 3
9 → 1

are the back edges.
The natural loop of a back edge n → d is d together with the set of nodes that can reach n
without going through d. It can be computed as follows:

/* Given a back edge n → d, compute the natural loop of the edge. */
procedure insert(m);
if m is not in loop then begin
    loop := loop ∪ {m};
    push m onto stack
end;

/* main program */
stack := empty;
loop := {d};
insert(n);
while stack is not empty do begin
    pop m, the first element of stack, off stack;
    for each predecessor p of m do insert(p)
end
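The procedure above translates directly into a small sketch; the edge list is the same reconstructed ten-node example graph (an assumption about the missing figure).

```python
EDGES = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 3), (4, 5), (4, 6),
         (5, 7), (6, 7), (7, 4), (7, 8), (8, 3), (8, 9), (8, 10),
         (9, 1), (10, 7)]      # ten-node example graph (assumed)

def natural_loop(edges, n, d):
    """Natural loop of back edge n -> d: d plus nodes reaching n avoiding d."""
    preds = {}
    for a, b in edges:
        preds.setdefault(b, set()).add(a)
    loop, stack = {d}, []
    def insert(m):
        if m not in loop:
            loop.add(m)
            stack.append(m)          # push m onto stack
    insert(n)
    while stack:
        m = stack.pop()              # pop m off stack
        for p in preds.get(m, ()):
            insert(p)
    return loop
```

Because d starts in the loop set, the backward search never continues past the header, which is exactly what confines the search to the loop body.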
Inner loop:
If we use the natural loops as "the loops", then we have the useful property that unless
two loops have the same header, they are either disjoint or one is entirely contained in
the other. Thus, neglecting loops with the same header for the moment, we have a natural
notion of inner loop: one that contains no other loop.
When two natural loops have the same header, but neither is nested within the other, they
are combined and treated as a single loop.
Pre-Headers:
Several loop transformations require statements to be moved "before the header".
Therefore a new block, called the pre-header, is created for the loop. The pre-header has
only the header as successor, and all edges which formerly entered the header of L from
outside L instead enter the pre-header.
Initially the pre-header is empty, but transformations on L may place statements in it.
(Figure: a loop L and its header (a) before and (b) after introduction of the pre-header.)
Reducible Flow Graphs:
Reducible flow graphs are special flow graphs for which several code-optimization
transformations are especially easy to perform: loops are unambiguously defined,
dominators can be easily calculated, and data-flow analysis problems can also be solved
efficiently.
The most important properties of reducible flow graphs are that there are no jumps into
the middle of loops from outside; the only entry to a loop is through its header.
Definition:
A flow graph G is reducible if and only if we can partition the edges into two disjoint
groups, forward edges and back edges, with the following properties.
The forward edges form an acyclic graph in which every node can be reached from the
initial node of G.
The back edges consist only of edges whose heads dominate their tails.

Example: The above flow graph is reducible.
If we know the relation DOM for a flow graph, we can find and remove all the back
edges.
If the remaining forward edges form an acyclic graph, then the flow graph is reducible.
In the above example, remove the five back edges 4→3, 7→4, 8→3, 9→1 and 10→7,
whose heads dominate their tails; the remaining graph is acyclic.
The key property of reducible flow graphs for loop analysis is that in such flow graphs
every set of nodes that we would informally regard as a loop must contain a back edge.
PEEPHOLE OPTIMIZATION
A statement-by-statement code-generation strategy often produces target code that
contains redundant instructions and suboptimal constructs. The quality of such target
code can be improved by applying optimizing transformations to the target program.

A simple but effective technique for improving the target code is peephole optimization,
a method for trying to improve the performance of the target program by examining a
short sequence of target instructions (called the peephole) and replacing these
instructions by a shorter or faster sequence, whenever possible.

The peephole is a small, moving window on the target program. The code in the peephole
need not be contiguous, although some implementations do require this. It is characteristic of
peephole optimization that each improvement may spawn opportunities for additional
improvements.
We shall give the following examples of program transformations that are characteristic
of peephole optimizations:
Redundant-instruction elimination
Flow-of-control optimizations
Algebraic simplifications
Use of machine idioms
Unreachable Code
Unreachable Code:
An unlabeled instruction immediately following an unconditional jump may be removed.
As an example, consider the following sequence of intermediate code:

(a)
if debug = 1 goto L1
goto L2
L1: print debugging information
L2:

One obvious peephole optimization is to eliminate jumps over jumps. Thus, no matter what
the value of debug, (a) can be replaced by:

(b)
if debug ≠ 1 goto L2
print debugging information
L2:

If debug is set to 0 at the beginning of the program, (b) can be replaced by:

(c)
if 0 ≠ 1 goto L2
print debugging information
L2:

As the argument of the first statement of (c) evaluates to a constant true, it can be replaced by
goto L2. Then all the statements that print debugging information are manifestly unreachable and
can be eliminated one at a time.
Flow-of-Control Optimizations:
The unnecessary jumps can be eliminated in either the intermediate code or the target code
by the following types of peephole optimizations. We can replace the jump sequence

goto L1
...
L1: goto L2

by the sequence

goto L2
...
L1: goto L2

If there are now no jumps to L1, then it may be possible to eliminate the statement L1: goto
L2, provided it is preceded by an unconditional jump. Similarly, the sequence

if a < b goto L1
...
L1: goto L2

can be replaced by

if a < b goto L2
...
L1: goto L2
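The retargeting step can be sketched as a single pass; the tuple encoding of instructions (('label', L), ('goto', L), ('if-goto', L) for a conditional jump) is an assumption made for the illustration.

```python
def collapse_jumps(code):
    """Retarget any jump whose target label is itself an unconditional goto."""
    # Map each label to the instruction immediately following it.
    follows = {ins[1]: code[i + 1]
               for i, ins in enumerate(code[:-1]) if ins[0] == 'label'}
    out = []
    for ins in code:
        if ins[0] in ('goto', 'if-goto'):
            nxt = follows.get(ins[1])
            if nxt is not None and nxt[0] == 'goto':
                ins = (ins[0], nxt[1])   # jump straight to the final target
        out.append(ins)
    return out
```

A later pass can then delete L1: goto L2 once nothing jumps to L1 any more.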
Finally, suppose there is only one jump to L1 and L1 is preceded by an unconditional goto.
Then the sequence

goto L1
...
L1: if a < b goto L2
L3:                                              (1)

may be replaced by

if a < b goto L2
goto L3
...
L3:                                              (2)

While the number of instructions in (1) and (2) is the same, we sometimes skip the
unconditional jump in (2), but never in (1). Thus (2) is superior to (1) in execution time.
Algebraic Simplification:
There is no end to the amount of algebraic simplification that can be attempted through
peephole optimization. Only a few algebraic identities occur frequently enough that it is
worth considering implementing them. For example, statements such as

x := x+0
or
x := x * 1

are often produced by straightforward intermediate code-generation algorithms, and they can
be eliminated easily through peephole optimization.
Reduction in Strength:
Reduction in strength replaces expensive operations by equivalent cheaper ones on the target
machine. Certain machine instructions are considerably cheaper than others and can often be
used as special cases of more expensive operators.
For example, x^2 is invariably cheaper to implement as x*x than as a call to an exponentiation
routine. Fixed-point multiplication or division by a power of two is cheaper to implement as
a shift. Floating-point division by a constant can be implemented as multiplication by a
constant, which may be cheaper:

x^2 → x*x

Use of Machine Idioms:
The target machine may have hardware instructions to implement certain specific operations
efficiently. For example, some machines have auto-increment and auto-decrement addressing
modes. These add or subtract one from an operand before or after using its value.

The use of these modes greatly improves the quality of code when pushing or popping a
stack, as in parameter passing. These modes can also be used in code for statements like
i := i+1:

i := i+1 → i++
i := i-1 → i--
INTRODUCTION TO GLOBAL DATA-FLOW ANALYSIS
In order to do code optimization and a good job of code generation, a compiler needs to
collect information about the program as a whole and to distribute this information to
each block in the flow graph.
The details of how data-flow equations are set up and solved depend on three factors:
The notions of generating and killing depend on the desired information, i.e., on the data
flow analysis problem to be solved. Moreover, for some problems, instead of proceeding
along with flow of control and defining out[s] in terms of in[s], we need to proceed
backwards and define in[s] in terms of out[s].
Since data flows along control paths, data-flow analysis is affected by the constructs in a
program. In fact, when we write out[s] we implicitly assume that there is unique end
point where control leaves the statement; in general, equations are set up at the level of
basic blocks rather than statements, because blocks do have unique end points.
There are subtleties that go along with such statements as procedure calls, assignments
through pointer variables, and even assignments to array variables.
Points and Paths:
Within a basic block, we talk of the point between two adjacent statements, as well as the
point before the first statement and after the last. Thus, block B1 has four points: one
before any of the assignments and one after each of the three assignments.
(Figure: flow graph with blocks B1-B6. B1 contains d1: i := m-1, d2: j := n, d3: a := u1;
B2 contains d4: i := i+1; B3 contains d5: j := j-1; B6 contains d6: a := u2.)
Now let us take a global view and consider all the points in all the blocks. A path from p1
to pn is a sequence of points p1, p2, ..., pn such that for each i between 1 and n-1, either

pi is the point immediately preceding a statement and pi+1 is the point immediately
following that statement in the same block, or

pi is the end of some block and pi+1 is the beginning of a successor block.
Reaching definitions:
A definition of a variable x is a statement that assigns, or may assign, a value to x. The
most common forms of definition are assignments to x and statements that read a value
from an I/O device and store it in x. These statements certainly define a value for x, and
they are referred to as unambiguous definitions of x. There are certain kinds of statements
that may define a value for x; they are called ambiguous definitions. The most usual forms
of ambiguous definitions of x are:

A call of a procedure with x as a parameter, or a procedure that can access x because x is
in the scope of the procedure.
An assignment through a pointer that could refer to x.

We say a definition d reaches a point p if there is a path from the point immediately
following d to p, such that d is not killed along that path. Thus a point can be reached by
an unambiguous definition and an ambiguous definition of the same variable appearing
later along one path.
Data-flow analysis of structured programs:
Flow graphs for control-flow constructs such as do-while statements have a useful
property: there is a single beginning point at which control enters and a single end point
that control leaves from when execution of the statement is over. We exploit this property
when we talk of the definitions reaching the beginning and the end of statements with the
following syntax:

S → id := E | S ; S | if E then S else S | do S while E
E → id + id | id

Expressions in this language are similar to those in the intermediate code, but the flow
graphs for statements have restricted forms.
(Figure: the restricted flow-graph forms for the statements S1 ; S2, if E then S1 else S2,
and do S1 while E.)
We define a portion of a flow graph called a region to be a set of nodes N that includes a
header, which dominates all other nodes in the region. All edges between nodes in N are
in the region, except for some that enter the header.
The portion of flow graph corresponding to a statement S is a region that obeys the
further restriction that control can flow to just one outside block when it leaves the
region.
We say that the beginning points of the dummy blocks at the entry and exit of a
statement's region are the beginning and end points, respectively, of the statement. The
equations are an inductive, or syntax-directed, definition of the sets in[S], out[S], gen[S],
and kill[S] for all statements S.

gen[S] is the set of definitions generated by S, while kill[S] is the set of definitions
that never reach the end of S.
Consider the following data-flow equations for reaching definitions:

i) S is a single assignment d: a := b+c

gen[S] = {d}
kill[S] = Da - {d}
out[S] = gen[S] ∪ (in[S] - kill[S])

Observe the rules for a single assignment to variable a. Surely that assignment is a
definition of a, say d, so gen[S] = {d}. It kills all the other definitions of a in the program,
so kill[S] = Da - {d}, where Da is the set of all definitions of a.

ii) S is the sequence S1 ; S2

gen[S] = gen[S2] ∪ (gen[S1] - kill[S2])
kill[S] = kill[S2] ∪ (kill[S1] - gen[S2])
in[S1] = in[S]
in[S2] = out[S1]
out[S] = out[S2]
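The composition rule for S = S1 ; S2 can be checked directly on small sets; the definition names d1..d4 below are arbitrary illustrative data, not taken from a particular flow graph.

```python
def sequence(gen1, kill1, gen2, kill2):
    """gen/kill of S = S1 ; S2 from the gen/kill of S1 and S2."""
    gen = gen2 | (gen1 - kill2)
    kill = kill2 | (kill1 - gen2)
    return gen, kill

def transfer(gen, kill, in_set):
    """out[S] = gen[S] U (in[S] - kill[S])."""
    return gen | (in_set - kill)
```

Applying the combined transfer function should agree with applying S1's transfer function and then S2's, which the second assertion checks.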
There is a subtle miscalculation in the rules for gen and kill. We have made the
assumption that the conditional expressions E in the if and do statements are
uninterpreted; that is, there exist inputs to the program that make their branches go
either way. We assume that any graph-theoretic path in the flow graph is also an
execution path, i.e., a path that is executed when the program is run with at least one
possible input.

When we compare the computed gen with the "true" gen we discover that the true gen is
always a subset of the computed gen. On the other hand, the true kill is always a superset
of the computed kill.
These containments hold even after we consider the other rules. It is natural to wonder
whether these differences between the true and computed gen and kill sets present a
serious obstacle to data-flow analysis. The answer lies in the use intended for these data.
Overestimating the set of definitions reaching a point does not seem serious; it merely
stops us from doing an optimization that we could legitimately do. On the other hand,
underestimating the set of definitions is a fatal error; it could lead us into making a
change in the program that changes what the program computes. For the case of reaching
definitions, then, we call a set of definitions safe or conservative if the estimate is a
superset of the true set of reaching definitions. We call the estimate unsafe, if it is not
necessarily a superset of the truth.
Returning now to the implications of safety on the estimation of gen and kill for reaching
definitions, note that our discrepancies, supersets for gen and subsets for kill are both in
the safe direction. Intuitively, increasing gen adds to the set of definitions that can reach a
point, and cannot prevent a definition from reaching a place that it truly reached.
Decreasing kill can only increase the set of definitions reaching any given point.
Many data-flow problems can be solved by synthesized translations similar to those used
to compute gen and kill. It can be used, for example, to determine loop-invariant
computations.
However, there are other kinds of data-flow information, such as the reaching-definitions
problem. It turns out that in is an inherited attribute, and out is a synthesized attribute
depending on in. We intend that in[S] be the set of definitions reaching the beginning of
S, taking into account the flow of control throughout the entire program, including
statements outside of S or within which S is nested.

The set out[S] is defined similarly for the end of S. It is important to note the distinction
between out[S] and gen[S]: the latter is the set of definitions that reach the end of S
without following paths outside S.
Considering the if-statement, we have conservatively assumed that control can follow
either branch, so a definition reaches the beginning of S1 or S2 exactly when it reaches the
beginning of S:

in[S1] = in[S2] = in[S]

A definition reaches the end of S if and only if it reaches the end of one or both
substatements; i.e.,

out[S] = out[S1] ∪ out[S2]
Representation of sets:
Sets of definitions, such as gen[S] and kill[S], can be represented compactly using bit
vectors. We assign a number to each definition of interest in the flow graph. Then the bit
vector representing a set of definitions will have 1 in position i if and only if the
definition numbered i is in the set.

The number of a definition statement can be taken as the index of the statement in an array
holding pointers to statements. However, not all definitions may be of interest during
global data-flow analysis. Therefore the numbers of definitions of interest will typically be
recorded in a separate table.

A bit-vector representation for sets also allows set operations to be implemented
efficiently. The union and intersection of two sets can be implemented by logical or and
logical and, respectively, basic operations in most systems-oriented programming
languages. The difference A - B of sets A and B can be implemented by taking the
complement of B and then using logical and to compute A ∧ ¬B.
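These operations can be sketched with plain integers as the bit vectors; the numbering below (definition d1 in bit 0, d2 in bit 1, and so on) is an illustrative convention.

```python
# Bit-vector sets of definitions: definition d(i+1) occupies bit i.
def union(a, b):
    return a | b            # logical or

def intersect(a, b):
    return a & b            # logical and

def difference(a, b):
    return a & ~b           # logical and with the complement of b

A = 0b1011                  # {d1, d2, d4}
B = 0b0110                  # {d2, d3}
```

Each set operation costs one machine word operation per word of the vector, which is what makes the representation attractive for data-flow analysis.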
Local reaching definitions:
Space for data-flow information can be traded for time, by saving information only at
certain points and, as needed, recomputing information at intervening points. Basic
blocks are usually treated as a unit during global flow analysis, with attention restricted to
only those points that are the beginnings of blocks.
Since there are usually many more points than blocks, restricting our effort to blocks is a
significant savings. When needed, the reaching definitions for all points in a block can be
calculated from the reaching definitions for the beginning of a block.
Use-definition chains:
It is often convenient to store reaching-definition information as use-definition chains, or
ud-chains, which are lists, for each use of a variable, of all the definitions that reach that
use.
Evaluation order:
The techniques for conserving space during attribute evaluation, also apply to the
computation of data-flow information using specifications. Specifically, the only
constraint on the evaluation order for the gen, kill, in and out sets for statements is that
imposed by dependencies between these sets. Having chosen an evaluation order, we are
free to release the space for a set after all uses of it have occurred.
Earlier circular dependencies between attributes were not allowed, but we have seen that
data-flow equations may have circular dependencies.
Data-flow analysis must take all control paths into account. If the control paths are
evident from the syntax, then data-flow equations can be set up and solved in a
syntax-directed manner.
When programs can contain goto statements or even the more disciplined break and
continue statements, the approach we have taken must be modified to take the actual
control paths into account.
Several approaches may be taken. The iterative method works for arbitrary flow graphs.
Since the flow graphs obtained in the presence of break and continue statements are
reducible, such constructs can be handled systematically using interval-based
methods.
However, the syntax-directed approach need not be abandoned when break and continue
statements are allowed.
Global transformations are not a substitute for local transformations; both must be performed.

Elimination of global common subexpressions:
The available-expressions data-flow problem discussed in the last section allows us to
determine if an expression at point p in a flow graph is a common subexpression. The
following algorithm formalizes the intuitive ideas presented for eliminating common
subexpressions.
For every statement s of the form x := y+z, such that y+z is available at the
beginning of s's block and neither y nor z is defined prior to statement s in that block,
do the following.

1. To discover the evaluations of y+z that reach s's block, we follow flow graph
   edges, searching backward from s's block. However, we do not go through
   any block that evaluates y+z. The last evaluation of y+z in each block
   encountered is an evaluation of y+z that reaches s.

2. Create a new variable u.

3. Replace each statement w := y+z found in (1) by
   u := y+z
   w := u

4. Replace statement s by x := u.
The search in step (1) of the algorithm for the evaluations of y+z that reach statement s
can also be formulated as a data-flow analysis problem. However, it does not make sense
to solve it for all expressions y+z and all statements or blocks, because too much
irrelevant information is gathered.
Not all changes made by the algorithm are improvements. We might wish to limit the
number of different evaluations reaching s found in step (1), probably to one.
The algorithm will miss the fact that a*z and c*z must have the same value in

a := x+y          c := x+y
b := a*z          d := c*z

because this simple approach to common subexpressions considers only the literal
expressions themselves, rather than the values computed by the expressions.
Copy propagation:
Various algorithms introduce copy statements such as x := y. Copies may also be generated
directly by the intermediate code generator, although most of these involve temporaries
local to one block and can be removed by the DAG construction. To eliminate a copy
statement s: x := y, we may substitute y for x in all the places where this value of x is
used, provided the following conditions are met by every such use u of x:

1. Statement s must be the only definition of x reaching u.

2. On every path from s to u, including paths that go through u several times
   (but not through s a second time), there are no assignments to y.

Condition (1) can be checked using ud-chaining information. To check condition (2), we
set up a new data-flow analysis problem in which in[B] is the set of copies s: x := y such
that every path from the initial node to the beginning of B contains the statement s, and
subsequent to the last occurrence of s there are no assignments to y.

If s meets conditions (1) and (2), then we remove s and replace all those uses of x by y.
Detection of loop-invariant computations:
Ud-chains can be used to detect those computations in a loop that are loop-invariant, that
is, whose value does not change as long as control stays within the loop. A loop is a region
consisting of a set of blocks with a header that dominates all the other blocks, so the only
way to enter the loop is through the header.
Having found the invariant statements within a loop, we can apply to some of them an
optimization known as code motion, in which the statements are moved to the pre-header of
the loop. The following three conditions ensure that code motion does not change what
the program computes. Consider s: x := y+z.

(2i) The block containing s dominates all exit nodes of the loop, where an exit of a loop is a
node with a successor not in the loop.

(2ii) There is no other statement in the loop that assigns to x. Again, if x is a temporary
assigned only once, this condition is surely satisfied and need not be checked.

(2iii) No use of x in the loop is reached by any definition of x other than s. This condition too
will be satisfied, normally, if x is a temporary.
To understand why no change to what the program computes can occur, conditions (2i)
and (2ii) of this algorithm ensure that the value of x computed at s must be the value of x
after any exit block of L. When we move s to a pre-header, s will still be the definition of
x that reaches the end of any exit block of L. Condition (2iii) ensures that any uses of x
within L did, and will continue to, use the value of x computed by s.
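The effect of code motion on an invariant computation can be demonstrated with a small model loop; the two functions below are an illustration, not the code-motion algorithm itself.

```python
def without_motion(y, z, n):
    """Loop before code motion: the invariant y+z is recomputed n times."""
    total = 0
    for _ in range(n):
        x = y + z           # loop-invariant, yet evaluated every iteration
        total += x
    return total

def with_motion(y, z, n):
    """After code motion: y+z is evaluated once, in the pre-header."""
    t = y + z               # hoisted to the pre-header
    total = 0
    for _ in range(n):
        total += t          # x := t inside the loop
    return total
```

Both versions compute the same result, which is exactly what the three conditions above guarantee for the transformed program.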
The condition (1) can be relaxed if we are willing to take the risk that we may actually
increase the running time of the program a bit; of course, we never change what the
program computes. The relaxed version of code motion condition (1) is that we may
move a statement s assigning x only if:
1. The block containing s either dominates all exits of the loop, or x is not used outside
   the loop. For example, if x is a temporary variable, we can be sure that the value will
   be used only in its own block.
If the code-motion algorithm is modified to use the relaxed condition (1), occasionally the
running time will increase, but we can expect to do reasonably well on the average. The
modified algorithm may move to the pre-header certain computations that may not be
executed in the loop. Not only does this risk slowing down the program significantly, it
may also cause an error in certain circumstances.
Even if none of conditions (2i), (2ii), (2iii) of the code-motion algorithm are met by an
assignment x := y+z, we can still take the computation y+z outside the loop. Create a new
temporary t, and set t := y+z in the pre-header. Then replace x := y+z by x := t in the loop.
In many cases we can propagate out the copy statement x := t.

Definitions of variables used by s are either outside L, in which case they reach the
pre-header, or they are inside L, in which case by step (3) they were moved to the pre-header
ahead of s.
We represent the statement s by a pointer ps, and put ps on each ud-chain containing s.
Then, no matter where we move s, we have only to change ps, regardless of how many
ud-chains s is on.
The dominator information is changed slightly by code motion. The pre-header is now
the immediate dominator of the header, and the immediate dominator of the pre-header is
the node that formerly was the immediate dominator of the header. That is, the pre-header
is inserted into the dominator tree as the parent of the header.
Elimination of induction variables:
Consider a test of the form if i relop x goto B, where i is an induction variable and x is
not. If i has a family member j with triple (i, c, d), so that j = c*i + d with c > 0, the test
can be replaced by

r := c*x
r := r + d    /* omit if d is 0 */
if j relop r goto B

where r is a new temporary. The case if x relop i goto B is handled
analogously. If there are two induction variables i1 and i2 in the test if i1 relop i2
goto B, then we check if both i1 and i2 can be replaced. The easy case is when we
have j1 with triple (i1, c1, d1) and j2 with triple (i2, c2, d2), and c1 = c2 and d1 = d2.
Then i1 relop i2 is equivalent to j1 relop j2.
Now, consider each induction variable j for which a statement j := s was
introduced. First check that there can be no assignment to s between the
introduced statement j := s and any use of j. In the usual situation, j is used in the
block in which it is defined, simplifying this check; otherwise, reaching-definitions
information plus some graph analysis is needed to implement the
check. Then replace all uses of j by uses of s and delete the statement j := s.