Compiler Answer Key


NOV/DEC-'08/CS1352-Answer Key

6. List out the three functions that are used to manipulate the list of labels in
backpatching.
· mklist(i) creates a new list containing only i, an index into the array of
quadruples, and returns a pointer to the list.
· merge_list(p1, p2) concatenates the two lists pointed to by p1 and p2 and
returns a pointer to the concatenated list.
· backpatch(p, i) inserts i as the target label for each statement on the list
pointed to by p.
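A minimal sketch of the three primitives in Python; the quadruple layout [op, arg1, arg2, target] is an assumption for illustration, not fixed by the answer above.

```python
quads = []  # the array of quadruples; each is [op, arg1, arg2, target]

def mklist(i):
    """Create a new list containing only i, an index into `quads`."""
    return [i]

def merge_list(p1, p2):
    """Concatenate the lists pointed to by p1 and p2; return the result."""
    return p1 + p2

def backpatch(p, i):
    """Insert i as the target label of every quadruple whose index is on p."""
    for q in p:
        quads[q][3] = i

# usage: two pending jumps get their common target filled in with one call
quads.append(['goto', None, None, None])   # index 0
quads.append(['goto', None, None, None])   # index 1
backpatch(merge_list(mklist(0), mklist(1)), 7)
```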

7. What is constant folding?


Deducing at compile time that the value of an expression is a constant and using
the constant instead of the expression is called constant folding.

8. What is DAG? What are the applications of DAG in compiler?


Directed acyclic graph (DAG) is a useful data structure for implementing
transformations on basic blocks. A DAG for the basic block has the following
information:
 A label for each node. For leaves, the label is an identifier and for interior
nodes, an operator symbol.
 For each node, a possibly empty list of attached identifiers
Applications:
· Determining the common sub-expressions.
· Determining which identifiers have their values used in the block
· Determining which statements of the block compute values that could be used outside the block.

9. What is a flow graph?


A flow graph is a directed graph in which flow-of-control information is added to
the basic blocks.
· The nodes of the flow graph are the basic blocks.
· The block whose leader is the first statement is called the initial block.
· There is a directed edge from block B1 to block B2 if B2 can immediately follow B1 in
some execution sequence. We say that B1 is a predecessor of B2 and B2 is a
successor of B1.

10. What are the criteria used for code-improving transformations?


 Must preserve the meaning of programs
 Must, on the average, speed up programs by a measurable amount
 Must be worth the effort.

PART – B

11. a. i. Discuss about the input buffering scheme in lexical analyzer. (6)
Determining the next lexeme often requires reading the input beyond the end of the
lexeme.
Buffer pairs: (2)
Address efficiency concerns
Used when a look-ahead on the input is needed


It is a specialized buffering technique used to reduce the overhead required to process
an input character. The buffer is divided into two N-character halves, and two pointers
are maintained. The scheme is used when the lexical analyzer needs to look ahead
several characters beyond the lexeme for a pattern before a match is announced. One
pointer marks the beginning of the current lexeme; the other, the forward pointer,
scans ahead until a match is found. The string of characters between the two pointers
forms the lexeme.
Increment procedure for the forward pointer: (2)
if forward at end of first half then begin
    reload second half
    forward += 1
end
else if forward at end of second half then begin
    reload first half
    move forward to beginning of first half
end
else
    forward += 1
Sentinels: (2)
A sentinel is a special character that cannot be part of the source program. It is used
to reduce the two tests (end of buffer, character comparison) to one, e.g. eof.
Increment procedure for the forward pointer using sentinels:
forward += 1
if forward↑ = eof then begin
    if forward at end of first half then begin
        reload second half
        forward += 1
    end
    else if forward at end of second half then begin
        reload first half
        move forward to beginning of first half
    end
    else  /* eof within a buffer marks the end of input */
        terminate lexical analysis
end
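The buffer-pair scheme with sentinels can be simulated in Python; the half size N, the sentinel byte, and the string-typed input are illustrative assumptions, not part of the answer above.

```python
EOF = '\0'  # sentinel: a character that cannot appear in the source program
N = 4       # size of each buffer half (tiny here, for illustration)

def scan(source):
    """Yield the characters of `source` through a two-half buffer with
    sentinels, reloading a half only when its sentinel is reached."""
    buf = [EOF] * (2 * N + 2)           # halves at 0..N-1 and N+1..2N,
    pos = 0                             # sentinels at N and 2N+1

    def reload(start):
        nonlocal pos
        chunk = source[pos:pos + N]
        pos += len(chunk)
        for i, ch in enumerate(chunk):
            buf[start + i] = ch
        buf[start + len(chunk)] = EOF   # sentinel right after the valid data

    reload(0)
    forward = 0
    while True:
        ch = buf[forward]
        if ch == EOF:
            if forward == N:            # end of first half: reload second
                reload(N + 1)
                forward = N + 1
            elif forward == 2 * N + 1:  # end of second half: reload first
                reload(0)
                forward = 0
            else:
                return                  # eof inside a half: real end of input
        else:
            yield ch
            forward += 1
```

Only one test (against the sentinel) is performed per character in the common case, as the answer describes.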

ii. Construct an NFA using Thompson’s construction algorithm for the regular
expression (a|b)*abb(a|b)* and convert it into a DFA. (10)
NFA:


The start state of the DFA is ε-closure(0)

ε-closure(0) = {0, 1, 2, 4, 7} = A
The input symbol alphabet is {a, b}
ε-closure(move(A, a)) = ε-closure(move({0, 1, 2, 4, 7}, a))
= ε-closure(3, 8) = {1, 2, 3, 4, 6, 7, 8} = B
DTrans[A, a] = B
ε-closure(move(A, b)) = ε-closure(move({0, 1, 2, 4, 7}, b))
= ε-closure(5) = {1, 2, 4, 5, 6, 7} = C
DTrans[A, b] = C
ε-closure(move(B, a)) = ε-closure(move({1, 2, 3, 4, 6, 7, 8}, a))
= ε-closure(3, 8) = {1, 2, 3, 4, 6, 7, 8} = B
DTrans[B, a] = B
ε-closure(move(B, b)) = ε-closure(move({1, 2, 3, 4, 6, 7, 8}, b))
= ε-closure(5, 9) = {1, 2, 4, 5, 6, 7, 9} = D
DTrans[B, b] = D
ε-closure(move(C, a)) = ε-closure(move({1, 2, 4, 5, 6, 7}, a))
= ε-closure(3, 8) = B
DTrans[C, a] = B
ε-closure(move(C, b)) = ε-closure(move({1, 2, 4, 5, 6, 7}, b))
= ε-closure(5) = C
DTrans[C, b] = C
ε-closure(move(D, a)) = ε-closure(move({1, 2, 4, 5, 6, 7, 9}, a))
= ε-closure(3, 8) = B
DTrans[D, a] = B
ε-closure(move(D, b)) = ε-closure(move({1, 2, 4, 5, 6, 7, 9}, b))
= ε-closure(5, 10) = {1, 2, 4, 5, 6, 7, 10, 11, 12, 14, 17} = E
DTrans[D, b] = E
ε-closure(move(E, a)) = ε-closure(move({1, 2, 4, 5, 6, 7, 10, 11, 12, 14, 17}, a))
= ε-closure(3, 8, 13) = {1, 2, 3, 4, 6, 7, 8, 11, 12, 13, 14, 16, 17} = F
DTrans[E, a] = F
ε-closure(move(E, b)) = ε-closure(move({1, 2, 4, 5, 6, 7, 10, 11, 12, 14, 17}, b))
= ε-closure(5, 15) = {1, 2, 4, 5, 6, 7, 11, 12, 14, 15, 16, 17} = G
DTrans[E, b] = G
ε-closure(move(F, a)) = ε-closure(3, 8, 13) = F
DTrans[F, a] = F
ε-closure(move(F, b)) = ε-closure(5, 9, 15)
= {1, 2, 4, 5, 6, 7, 9, 11, 12, 14, 15, 16, 17} = H
DTrans[F, b] = H
ε-closure(move(G, a)) = ε-closure(3, 8, 13) = F
DTrans[G, a] = F
ε-closure(move(G, b)) = ε-closure(5, 15) = G
DTrans[G, b] = G
ε-closure(move(H, a)) = ε-closure(3, 8, 13) = F
DTrans[H, a] = F
ε-closure(move(H, b)) = ε-closure(5, 10, 15)
= {1, 2, 4, 5, 6, 7, 10, 11, 12, 14, 15, 16, 17} = I


DTrans[H, b] = I
ε-closure(move(I, a)) = ε-closure(3, 8, 13) = F
DTrans[I, a] = F
ε-closure(move(I, b)) = ε-closure(5, 15) = G
DTrans[I, b] = G

Transition table for DFA (* marks an accepting state):

State     a    b
A         B    C
B         B    D
C         B    C
D         B    E
E *       F    G
F *       F    H
G *       F    G
H *       F    I
I *       F    G
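The ε-closure/move steps carried out above are the standard subset construction. A compact sketch follows; for brevity the usage feeds it a small hypothetical ε-free NFA for (a|b)*abb (states 0–3, with 3 accepting), not the 18-state Thompson NFA used in the worked answer.

```python
def eps_closure(states, eps):
    """All states reachable from `states` using ε-edges alone."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def subset_construction(start, move, eps, alphabet):
    """Return DTrans as a dict {(dfa_state, symbol): dfa_state}."""
    A = eps_closure({start}, eps)
    dtrans, seen, worklist = {}, {A}, [A]
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            targets = set()
            for s in S:
                targets |= set(move.get((s, a), ()))
            T = eps_closure(targets, eps)
            dtrans[(S, a)] = T
            if T and T not in seen:       # a new DFA state to process
                seen.add(T)
                worklist.append(T)
    return dtrans

# usage: ε-free NFA for (a|b)*abb, given as flat transition dicts
move = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}, (2, 'b'): {3}}
dtrans = subset_construction(0, move, {}, 'ab')
```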

Transition diagram for DFA:

(OR)


b. i. Illustrate the compiler’s internal representation of the changes in the source


program, as translation progresses by considering the translation of the
statement A:=B+C*50. (8)
The process of compilation is complex, so it is customary, from both a logical and an
implementation point of view, to partition the compilation process into several phases.
A phase is a logically cohesive operation that takes one representation of the source
program as input and produces another representation as output. (2)

Source program is a stream of characters, e.g. a := b + c * 50 (4)

– Lexical analysis: groups characters into non-separable units, called tokens, and
generates the token stream: id1 = id2 + id3 * const
• The information about the identifiers must be stored somewhere
(symbol table).
– Syntax analysis: checks whether the token stream meets the grammatical
specification of the language and generates the syntax tree.
– Semantic analysis: checks whether the program has a meaning (e.g. if a is a
record and b and c are integers then the assignment does not make sense).
[Figure: the syntax tree for id1 := id2 + id3 * 50 after syntax analysis, and the
same tree after semantic analysis, with an inttoreal node inserted above the
integer constant 50.]
– Intermediate code generation, intermediate code is something that is both close
to the final machine code and easy to manipulate (for optimization). One example
is the three-address code:
dst = op1 op op2
• The three-address code for the assignment statement:
temp1 = inttoreal(50);
temp2 = id3 * temp1;
temp3 = id2 + temp2;
id1 = temp3
– Code optimization: produces semantically equivalent but better (faster or smaller) code.
temp1 = id3 * 50.0
id1 = id2 + temp1
– Code generation: generates assembly
MOVF id3, R2
MULF #50.0, R2
MOVF id2, R1
ADDF R2, R1
MOVF R1, id1


 Symbol Table Creation / Maintenance


 Contains Info (storage, type, scope, args) on Each “Meaningful” Token,
typically Identifiers
 Data Structure Created / Initialized During Lexical Analysis
 Utilized / Updated During Later Analysis & Synthesis
 Error Handling
 Detection of Different Errors Which Correspond to All Phases
 Each phase should be able to deal with errors, so that compilation can
proceed and further errors can be detected
[Figure: phases of a compiler]
Source Program → (1) Lexical Analyzer → (2) Syntax Analyzer → (3) Semantic
Analyzer → (4) Intermediate Code Generator → (5) Code Optimizer → (6) Code
Generator → Target Program, with the Symbol-table Manager and the Error
Handler interacting with all six phases.
(2)

ii. Construct a DFA directly from the regular expression (a|b)*abb without
constructing an NFA. (8)
Syntax tree for (a|b)*abb#:


Calculation of firstpos, lastpos and nullable for nodes in syntax tree:

Calculation of followpos:

Node followpos
1 {1, 2, 3}
2 {1, 2, 3}
3 {4}
4 {5}
5 {6}
6 -

Now, the start state of DFA is firstpos of the root


So, A= {1, 2, 3}
Consider the input symbol ‘a’:
Position 1 and 3 are for ‘a’ in A
So, let B = followpos(1) U followpos(3)
= {1, 2, 3} U {4} = {1, 2, 3, 4}
DTrans[A, a] = B
Consider the input symbol ‘b’:
Position 2 is for ‘b’ in A
So, followpos(2)
= {1, 2, 3} = A
DTrans[A, b] = A


Now continue with B.

Consider the input symbol ‘a’:
Positions 1 and 3 are for ‘a’ in B
So, followpos(1) U followpos(3)
= {1, 2, 3} U {4} = {1, 2, 3, 4} = B
DTrans[B, a] = B
Consider the input symbol ‘b’:
Positions 2 and 4 are for ‘b’ in B
So, followpos(2) U followpos(4)
= {1, 2, 3, 5} = C
DTrans[B, b] = C

Now continue with C.

Consider the input symbol ‘a’:
Positions 1 and 3 are for ‘a’ in C
So, followpos(1) U followpos(3)
= {1, 2, 3} U {4} = {1, 2, 3, 4} = B
DTrans[C, a] = B
Consider the input symbol ‘b’:
Positions 2 and 5 are for ‘b’ in C
So, followpos(2) U followpos(5)
= {1, 2, 3, 6} = D
DTrans[C, b] = D

Now continue with D,


Consider the input symbol ‘a’:
Position 1 and 3 are for ‘a’ in D
So, followpos(1) U followpos(3)
= {1, 2, 3} U {4} = {1, 2, 3, 4} = B
DTrans[D, a] = B
Consider the input symbol ‘b’:
Position 2 is for ‘b’ in D
So, followpos(2) = {1, 2, 3} = A
DTrans[D, b] = A

The position associated with the end marker #, namely 6, is in D. So, D is the final state.
DFA transitions:
A --a--> B, A --b--> A
B --a--> B, B --b--> C
C --a--> B, C --b--> D
D --a--> B, D --b--> A


12. a. i. Give the definitions for FIRST(X) and FOLLOW(A) procedures used in
construction of predictive parser. (4)
First(X): (Apply the following rules until no more terminals or ε can be added to any
First set.) (2)
 If X is a terminal, then First(X) is {X}
 If X -> ε is a production, then add ε to First(X)
 If X is a nonterminal and X -> Y1 Y2 .. Yk is a production, then place a in
First(X) if, for some i, a is in First(Yi) and ε is in all of First(Y1) … First(Yi-1)

Follow(X): (Apply the following rules until nothing can be added to any Follow set.) (2)
 Place $ in Follow(S), where S is the start symbol
 If there is a production A -> αBβ, then everything in First(β) except for ε is placed in
Follow(B)
 If there is a production A -> αB, or a production A -> αBβ where First(β) contains ε,
then everything in Follow(A) is in Follow(B)
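The First rules above can be run to a fixed point. A sketch, assuming the grammar is a dict of productions with ε written as the empty string; the E/E'/T grammar in the usage is a made-up example, not from the question:

```python
def compute_first(grammar, terminals):
    """grammar: nonterminal -> list of productions (each a list of symbols);
    the empty production [] stands for X -> ε, and '' stands for ε."""
    first = {t: {t} for t in terminals}   # rule 1: First(a) = {a}
    for nt in grammar:
        first[nt] = set()
    changed = True
    while changed:                        # iterate until nothing can be added
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                nullable_prefix = True
                for sym in prod:          # rule 3: scan Y1 Y2 .. Yk
                    add = first[sym] - {''}
                    if not add <= first[nt]:
                        first[nt] |= add
                        changed = True
                    if '' not in first[sym]:
                        nullable_prefix = False
                        break
                if nullable_prefix and '' not in first[nt]:
                    first[nt].add('')     # rule 2, or every Yi derives ε
                    changed = True
    return first

# usage: E -> T E',  E' -> + T E' | ε,  T -> id
grammar = {'E': [['T', "E'"]], "E'": [['+', 'T', "E'"], []], 'T': [['id']]}
first = compute_first(grammar, {'+', 'id'})
```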

ii. What is an operator grammar? Draw the precedence graph for the following
table: (12)

       a     (     )     ,     $
a                  >     >     >
(      <     <     =     <
)                  >     >     >
,      <     <     >     >
$      <     <

Operator grammar:
A grammar with the property that no production right side is ε or has two
adjacent nonterminals is called an operator grammar.

Construction of the precedence graph and functions:

 Create symbols fa and ga for each terminal a
 Partition the created symbols into groups:
f( and g) are in the same group because of the = relation
 Create a directed graph: if fa > gb, place an edge from the group of fa to the
group of gb; if fa < gb, place an edge from the group of gb to the group of fa


 There are no cycles, so precedence functions exist,

i.e. f(a) is the length of the longest path beginning at the group of fa, and similarly for g:

      a    (    )    ,    $
f     2    0    2    2    0
g     3    3    0    1    0
(OR)
b. i. Write a note on error recovery in predictive parsing. (4)
An error is detected during predictive parsing when the terminal on the top of the
stack does not match the next input symbol, or when a nonterminal A is on the top of the
stack, a is the next input symbol, and the parsing-table entry M[A, a] is empty.
Panic mode error recovery is based on the idea of skipping symbols on the input until a
token in a selected set of synchronizing tokens appears. Heuristics for choosing the
synchronizing token set:
 Place all the symbols in Follow(A) into the synchronizing set for A
 Add to the synchronizing set of a lower construct the symbols that begin
higher constructs
 Place all the symbols in First(A) into the synchronizing set for A
 If a nonterminal can generate the empty string, then the production deriving ε
can be used as a default
 If the terminal on top of the stack does not match, pop the terminal, issue
a message saying that the terminal was inserted, and continue parsing
Phrase-level recovery is implemented by filling in the blank entries in the predictive
parsing table with pointers to error routines. These routines may insert, change or delete
symbols on the input and issue appropriate error messages.

ii. Write the LR parsing algorithm. Check whether the grammar is SLR (1) or not.
Justify the answer with reasons. (12)
S->L=R | R; L->*R | id; R->L
LR parsing algorithm:
set ip to point to the first symbol of w$
repeat
    let s be the state on the top of the stack and a the symbol pointed to by ip
    if action[s, a] = shift s' then
        push a then s' on top of the stack
        advance ip to the next input symbol
    else if action[s, a] = reduce A -> β then
        pop 2*|β| symbols off the stack
        let s' be the state now on top of the stack
        push A then goto[s', A] on top of the stack
        output the production A -> β
    else if action[s, a] = accept then
        return
    else
        error()
end
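The driver above can be sketched in Python with hand-built tables. The grammar here is a tiny illustrative one (S -> ( S ) | x), not the S/L/R grammar analysed below, and only states are kept on the stack, so a reduce pops |β| entries instead of 2|β|:

```python
# hand-built SLR tables for: S -> ( S ) | x   (assumed example grammar)
action = {
    (0, '('): ('s', 2), (0, 'x'): ('s', 3),
    (1, '$'): ('acc',),
    (2, '('): ('s', 2), (2, 'x'): ('s', 3),
    (3, ')'): ('r', 'S', 1), (3, '$'): ('r', 'S', 1),   # S -> x
    (4, ')'): ('s', 5),
    (5, ')'): ('r', 'S', 3), (5, '$'): ('r', 'S', 3),   # S -> ( S )
}
goto_ = {(0, 'S'): 1, (2, 'S'): 4}

def lr_parse(tokens, action, goto_):
    """Return the productions used (a rightmost derivation in reverse)."""
    tokens = tokens + ['$']
    stack, ip, output = [0], 0, []      # stack holds states only
    while True:
        s, a = stack[-1], tokens[ip]
        act = action.get((s, a))
        if act is None:
            raise SyntaxError(f"unexpected {a!r} in state {s}")
        if act[0] == 's':               # shift to state act[1]
            stack.append(act[1])
            ip += 1
        elif act[0] == 'r':             # reduce by A -> β, |β| = act[2]
            _, A, beta_len = act
            del stack[len(stack) - beta_len:]
            stack.append(goto_[(stack[-1], A)])
            output.append((A, beta_len))
        else:                           # accept
            return output
```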


Canonical collection of sets of LR(0) items for the above grammar (augmented
with S' -> S):

I0: S'->.S, S->.L=R, S->.R, L->.*R, L->.id, R->.L
I1: S'->S.
I2: S->L.=R, R->L.
I3: S->R.
I4: L->*.R, R->.L, L->.*R, L->.id
I5: L->id.
I6: S->L=.R, R->.L, L->.*R, L->.id
I7: L->*R.
I8: R->L.
I9: S->L=R.

Consider the set of items I2. The first item in this set makes action[2, =] be shift 6.
Since Follow(R) contains =, the second item sets action[2, =] to reduce R->L. Thus the
entry action[2, =] is multiply defined, so the grammar is not SLR(1).

13. a. i. What are the various data structures used for symbol table construction?
Explain any one in detail. (8)
A compiler uses a symbol table to keep track of scope and binding information about
names. The table is searched every time a name is encountered in the source text, and it
changes whenever a new name, or new information about an existing name, is
discovered. (2)
Data Structures: (2)
Linear lists and hash tables
Hash tables: (4)
Open hashing is used. Open refers to the property that there need be no limit on
the number of entries that can be made in the table. There are two parts to the data
structure:
 A hash table consisting of a fixed array of m pointers to table entries
 Table entries organized into m separate linked lists, called buckets.
To determine whether there is an entry for string s in the symbol table, apply a hash
function h to s, such that h(s) returns an integer between 0 and m-1. If s is in the symbol
table, then it is on the list numbered h(s). If s is not yet in the symbol table, it is entered
by creating a record for s that is linked at the front of list numbered h(s).
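A sketch of the open-hashing table just described, with a fixed array of m buckets; the particular hash function and the dict-typed info payload are illustrative choices, not part of the answer:

```python
class SymbolTable:
    """Open hashing: a fixed array of m pointers to chains of entries."""

    def __init__(self, m=211):
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def _hash(self, s):
        h = 0
        for ch in s:                      # simple multiplicative string hash
            h = (h * 65599 + ord(ch)) % self.m
        return h                          # an integer between 0 and m-1

    def lookup(self, s):
        """Return the info for s, or None if s is not in the table."""
        for name, info in self.buckets[self._hash(s)]:
            if name == s:
                return info
        return None

    def insert(self, s, info):
        """Link a new record for s at the front of list numbered h(s)."""
        self.buckets[self._hash(s)].insert(0, (s, info))

# usage
st = SymbolTable()
st.insert("count", {"type": "int"})
```

There is no limit on the number of entries, only on the number of buckets, matching the "open" property described above.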

ii. Let A be a 10 x 20 array with low1 = low2 = 1. Let w = 4. Draw an annotated
parse tree for the assignment statement x := A[y, z]. Give the sequence of three-address
statements generated. (8)
In the case of a two-dimensional array stored in row-major form (row-by-row), the
relative address of A[i1, i2] can be calculated by the formula
base + ((i1 - low1) * n2 + (i2 - low2)) * w


where n2 = high2 - low2 + 1 is the number of values that i2 can take, and low1 and
low2 are the lower bounds on the values of i1 and i2.
Assuming that i1 and i2 are the only values that are not known at compile time,
we can rewrite the above expression as
((i1 * n2) + i2) * w + (base - ((low1 * n2) + low2) * w)
=> ((i1 * n2) + i2) * w + constant c
where c can be determined at compile time.
Here, n1 = 10, n2 = 20, i1 = y, i2 = z, w = 4, low1 = low2 = 1
c = base - ((low1 * n2) + low2) * w
= base - ((1*20) + 1) * 4 = base - 84
Expression: (y*20+z)*4
3AC to access A[y,z] is
t1:=y*20
t1:=t1+z
t2:=4*t1
t3:=c //c=base – 84
t4:=t3[t2]
x:=t4
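The address arithmetic can be sanity-checked numerically: the direct formula and the split run-time-plus-constant form must agree for every index pair. `base` below is a hypothetical starting address.

```python
def element_address(base, i1, i2, n2=20, low1=1, low2=1, w=4):
    """Relative address of A[i1, i2] in row-major order (direct formula)."""
    return base + ((i1 - low1) * n2 + (i2 - low2)) * w

def split_address(base, i1, i2, n2=20, low1=1, low2=1, w=4):
    """Same address as run-time part + compile-time constant (base - 84)."""
    c = base - ((low1 * n2) + low2) * w
    return ((i1 * n2) + i2) * w + c

# usage: both forms agree for every index pair of the 10 x 20 array
base = 1000   # hypothetical
for y in (1, 3, 10):
    for z in (1, 7, 20):
        assert element_address(base, y, z) == split_address(base, y, z)
```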
(OR)

b. How would you generate the intermediate code for the flow of control statements?
Explain with examples.
Flow of control statements:
S-> if E then S1 | if E then S1 else S2 | while E do S1
Semantic rules for if E then S1:
E.true:= newlabel;
E.false:=S.next;
S1.next:=S.next;
S.code:=E.code || gen(E.true “:”) || S1.code
Semantic rules for if E then S1 else S2:
E.true:= newlabel;
E.false:=newlabel;
S1.next:=S.next;
S2.next:=S.next;
S.code:=E.code || gen(E.true “:”) || S1.code || gen(‘goto’ S.next) ||
gen(E.false “:”) || S2.code
Example:
Example: the boolean expression a and b and c
If a is false, the rest of the expression need not be evaluated. So, we insert the
labels E.true and E.false at the appropriate places.
if a goto E.true
goto E.false
E.true: if b goto E1.true
goto E.false
E1.true: if c goto E2.true
goto E.false


E2.true : exp =1
E.false: exp =0
Semantic rules for while E do S1:
S.begin:=newlabel
E.true:= newlabel;
E.false:=S.next;
S1.next:=S.begin;
S.code:=gen(S.begin’:’) || E.code || gen(E.true “:”) || S1.code || gen(‘goto’ S.begin)
Example:
while a<b do
if c<d then
x=y+z
else
x=y-z
3AC generated is
L1: if a<b goto L2
goto Lnext
L2: if c<d goto L3
goto L4
L3: t1:=y+z
x:=t1
goto L1
L4: t2:=y-z
x:=t2
goto L1
Lnext:
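The semantic rules for `while E do S1` can be sketched as a small generator. The tuple form of E and the string-based 3AC are representational assumptions for illustration:

```python
label_count = 0

def newlabel():
    """Fresh-label generator (L1, L2, ...)."""
    global label_count
    label_count += 1
    return f"L{label_count}"

def gen_while(E, S1_code, S_next):
    """S.code for `while E do S1`: S.begin label, the test for E with its
    true exit, the jump to S.next on false, the body, and the jump back."""
    begin, true = newlabel(), newlabel()   # S.begin and E.true
    a1, relop, a2 = E
    return ([f"{begin}:",
             f"if {a1} {relop} {a2} goto {true}",
             f"goto {S_next}",             # E.false = S.next
             f"{true}:"]
            + S1_code                      # S1.next = S.begin
            + [f"goto {begin}"])

# usage: while a < b do { x := y + z }
code = gen_while(('a', '<', 'b'), ["t1 := y + z", "x := t1"], "Lnext")
```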

14. a. i. Explain peephole optimization with examples. (8)


Peephole optimization is a simple and effective technique for locally improving
target code. It improves the performance of the target program by examining short
sequences of target instructions and replacing them by shorter or faster sequences
wherever possible.
A peephole is a small, moving window on the target program.
• Local in nature
• Pattern driven
• Limited by the size of the window
Characteristics of peephole optimization:
· Redundant instruction elimination
· Flow of control optimization
· Algebraic simplification
· Use of machine idioms
• Constant Folding
x := 32
x := x + 32 becomes x := 64
• Unreachable Code
An unlabeled instruction immediately following an unconditional jump is removed.


goto L2
x := x + 1  unneeded
• Flow of control optimizations
Unnecessary jumps are eliminated.
goto L1

L1: goto L2 becomes goto L2
• Algebraic Simplification
x := x + 0  unneeded
• Dead code elimination
x := 32  where x not used after statement
y := x + y  y := y + 32
• Reduction in strength
Replace expensive operations by equivalent cheaper ones
x := x * 2  x := x + x
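Two of the patterns above, unreachable-code removal and the jump-to-jump simplification, can be sketched over tuple-encoded instructions (an assumed representation, not a fixed IR):

```python
def peephole(code):
    """Drop unlabeled instructions after an unconditional goto, then
    retarget jumps whose target label holds only another goto."""
    # pass 1: unreachable code elimination
    out, skipping = [], False
    for instr in code:
        if skipping and instr[0] != 'label':
            continue                 # unreachable: follows a goto, no label
        out.append(instr)
        skipping = instr[0] == 'goto'
    # pass 2: flow-of-control optimization (one level of jump chasing)
    target = {}
    for i, instr in enumerate(out):
        if instr[0] == 'label' and i + 1 < len(out) and out[i + 1][0] == 'goto':
            target[instr[1]] = out[i + 1][1]
    result = []
    for instr in out:
        if instr[0] == 'goto':
            result.append(('goto', target.get(instr[1], instr[1])))
        else:
            result.append(instr)
    return result

# usage: the `add` is unreachable, and the jump to L1 is retargeted to L2
code = [('goto', 'L1'), ('add', 'x', 'x', '1'), ('label', 'L1'),
        ('goto', 'L2'), ('label', 'L2'), ('ret',)]
```

Chains of jumps longer than one would need the second pass repeated until no change, which is typical of peephole passes.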

ii. Explain the DAG representation of the basic block with an example. (8)
A DAG for a basic block is a directed acyclic graph in which (2)
 leaves are labeled by unique ids, either variable names or constants
 Interior nodes are operators
 Nodes are also given a sequence of ids for labels to store the computed values.
It is useful for implementing transformations on basic blocks and shows how values
computed by a statement are used in subsequent statements.
e.g. t1 := 4*i
     t2 := a[t1]
The DAG has an interior node [] (attached identifier t2) whose left child is the
leaf a and whose right child is an interior node * (attached identifier t1) with
the leaves 4 and i as children.
Algorithm for the construction of DAG: (4)
Input: A basic block
Output: DAG for that basic block, having
 Label for each node where leaves are identifiers, interior nodes are operator
symbol.
 for each node, a list of identifiers to hold computed values
1) x = y op z 2) x = op y 3) x = y
Step 1: If node(y) is undefined, create a leaf labeled y and let node(y) be this node. In 1),
if node(z) is undefined, create a leaf labeled z and let that leaf be node(z)
Step 2: For 1), create node op with left child y and right child z, after checking for
common sub expression
For 2), check for a node labeled op with the single child node(y); if there is none,
create such a node.
For 3), let n be node(y).


Step 3: Delete x from the list of attached identifiers for node(x). Append x to the list
of attached identifiers for the node n found in step 2 and set node(x) to n.
Applications of DAG: (2)
· Determining the common sub-expressions.
· Determining which identifiers have their values used in the block
· Determining which statements compute values that could be used outside the block
· Simplifying the list of quadruples by eliminating the common sub-expressions and by
not performing assignments of the form x := y unless they are necessary.
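The construction algorithm above amounts to value numbering: each distinct (op, left, right) triple gets one node, so common subexpressions fall out automatically. A sketch (the dict-based node representation is an assumption; the usage runs the basic block from question 14.b.ii):

```python
def build_dag(block):
    """block: list of (dst, op, arg1, arg2); op is None for a copy dst := arg1."""
    nodes = []     # each node: {'label', 'left', 'right', 'ids'}
    node_of = {}   # identifier -> index of node holding its current value
    cache = {}     # (op, left, right) -> node index, for CSE detection

    def leaf(name):
        """Step 1: create a leaf for `name` unless node(name) is defined."""
        if name not in node_of:
            nodes.append({'label': name, 'left': None, 'right': None,
                          'ids': []})
            node_of[name] = len(nodes) - 1
        return node_of[name]

    for dst, op, a1, a2 in block:
        if op is None:                       # case 3: x := y
            n = leaf(a1)
        else:                                # case 1: x := y op z
            key = (op, leaf(a1), leaf(a2))
            if key not in cache:             # step 2: reuse or create op node
                nodes.append({'label': op, 'left': key[1], 'right': key[2],
                              'ids': []})
                cache[key] = len(nodes) - 1
            n = cache[key]
        old = node_of.get(dst)               # step 3: move dst's attachment
        if old is not None and dst in nodes[old]['ids']:
            nodes[old]['ids'].remove(dst)
        nodes[n]['ids'].append(dst)
        node_of[dst] = n
    return nodes, node_of

# usage: the block from question 14.b.ii -- B*C is a common subexpression
block = [('D', '*', 'B', 'C'), ('E', '+', 'A', 'B'),
         ('B', '*', 'B', 'C'), ('A', '-', 'E', 'D')]
nodes, node_of = build_dag(block)
```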

(OR)

b. i. What is a three address code? What are its types? How is it implemented? (12)
It is one of the intermediate representations. It is a sequence of statements of the
form x:= y op z, where x, y, and z are names, constants or compiler-generated
temporaries and op is an operator which can be arithmetic or a logical operator. E.g.
x+y*z is translated as t1=y*z and t2=x+t1. (2)
Reason for the term three-address code is that each statement usually contains
three addresses, two for the operands and one for the result. (2)

Common three address statements: (4)


 x:=y op z (assignment statements)
 x:= op y (assignment statements)
 x:=y (copy statements)
 goto L (unconditional jump)
 Conditional jumps like if x relop y goto L
 param x, call p,n and return y for procedure calls
 indexed assignments x:=y[i] and x[i]:= y
 address and pointer assignments x:=&y, x:=*y and *x:=y
Implementation: (4)
 Quadruples
Record with four fields, op, arg1, arg2 and result
 Triples
Record with three fields, op, arg1, arg2 to avoid entering temporary names
into symbol table. Here, refer the temporary value by the position of the statement
that computes it.
 Indirect triples
List the pointers to triples rather than listing the triples
For a := b * -c + b * -c

Quadruples
        op       arg1    arg2    result
(0)     uminus   c               t1
(1)     *        b       t1      t2
(2)     uminus   c               t3
(3)     *        b       t3      t4
(4)     +        t2      t4      t5
(5)     :=       t5              a


Triples
        op       arg1    arg2
(0)     uminus   c
(1)     *        b       (0)
(2)     uminus   c
(3)     *        b       (2)
(4)     +        (1)     (3)
(5)     assign   a       (4)

Indirect Triples

Statement list          Triples
(0)  (14)               (14)  uminus   c
(1)  (15)               (15)  *        b      (14)
(2)  (16)               (16)  uminus   c
(3)  (17)               (17)  *        b      (16)
(4)  (18)               (18)  +        (15)   (17)
(5)  (19)               (19)  assign   a      (18)

ii. Construct the DAG for the following basic block:


D:=B*C; E:=A+B; B:=B*C; A:=E-D (4)

15. a. i. Explain about code generation algorithm. (8)


It generates target code for a sequence of three address statements. (2)
Assumptions:
 For each operator in three address statement, there is a corresponding target
language operator.
 Computed results can be left in registers as long as possible.
E.g. a=b+c:
 Add Rj,Ri where Ri has b and Rj has c and result in Ri. Cost=1;
 Add c, Ri where Ri has b and result in Ri. Cost=2;
 Mov c, Rj; Add Rj, Ri; Cost=3;

Register descriptor: Keeps track of what is currently in each register


Address descriptor: Keeps tracks of the location where the current value of the name
can be found at run time. (2)


Code generation algorithm: for x := y op z (2)

 Invoke the function getreg to determine the location L where the result of y
op z should be stored (a register or a memory location)
 Consult the address descriptor for y to determine y', the current location of y;
if the value of y is not already in L, generate the instruction MOV y', L
 Generate the instruction op z', L, where z' is the current location of z
 If the current values of y and/or z have no next uses, update the register
descriptor accordingly
Getreg: (2)
 If y is in a register that holds the value of no other name and y is not live
after the statement, return the register of y as L
 Failing that, return an empty register if one exists
 Failing that, if x has a next use in the block, find a suitable occupied
register and empty it
 If x is not used in the block, or no suitable register is found, select the
memory location of x as L

ii. Explain the various storage allocation strategies. (8)


• Static allocation lays out storage for all data objects during compile time
• Stack allocation manages the run-time storage as a stack
• Heap allocation allocates and deallocates storages as needed at runtime from heap
area
Static allocation: (2)
• Names are bound to storage at compile time
• No run-time support package is needed
• Every time a procedure is activated, its names are bound to the same storage locations
• The compiler must decide where activation records should go
Limitations:
 sizes must be known at compile time
 recursive procedures are restricted
 data structures cannot be created dynamically
Stack allocation: (3)
 Activation records are pushed and popped as activations begin and end.
 Locals are bound to fresh storage in each activation and deleted when the
activation ends.
 Call sequence and return sequence
 caller and callee
 Dangling references
Heap allocation: (3)
Stack allocation cannot be used if either of the following is possible:
1. The values of local names must be retained when an activation ends
2. A called activation outlives the caller.
 Allocate pieces of memory for activation records, which can be deallocated in any
order
 Maintain linked list of free blocks
 Fill a request for size ‘s’ with a block of size s’, where s’ is the smallest size
greater than or equal to s
 Use heap manager, which takes care of defragmentation and garbage collection.
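The "smallest s' >= s" policy above is a best-fit search over the free list. A sketch where the free list is simply a list of (address, size) pairs, with the remainder of a split block returned to the list (coalescing and garbage collection, mentioned above, are omitted):

```python
def allocate(free_list, s):
    """free_list: list of (addr, size) blocks. Returns (addr, new_free_list);
    addr is None when no block of size >= s exists."""
    candidates = [(size, addr) for addr, size in free_list if size >= s]
    if not candidates:
        return None, free_list
    size, addr = min(candidates)              # smallest adequate block s'
    rest = [(a, sz) for a, sz in free_list if a != addr]
    if size > s:                              # split: free the tail s' - s
        rest.append((addr + s, size - s))
    return addr, rest

# usage
free = [(0, 16), (100, 8), (200, 32)]
addr, free = allocate(free, 8)                # exact fit at 100
addr2, free = allocate(free, 20)              # splits the 32-byte block
```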
(OR)


b. Why do we need code optimization? Explain the principal sources of


optimization.
Code optimization is needed to make the code run faster or take less space or both. (2)
Function preserving transformations:
 Common sub expression elimination
 Copy propagation
 Dead-code elimination
 Constant folding
Common sub expression elimination: (3)
An expression E is called a common sub expression if E was previously computed
and the values of the variables in E have not changed since the previous computation.
Copy propagation: (2)
Assignments of the form f := g are called copy statements, or copies for short. The
idea is to use g for f wherever possible after the copy statement.
Dead-code elimination: (3)
A variable is live at a point in the program if its value can be used subsequently;
otherwise it is dead, and statements that compute values which are never used can be
eliminated.
Constant folding:
Deducing at compile time that the value of an expression is a constant
and using the constant instead is called constant folding.
Loop optimization: (6)
 Code motion: moving code outside the loop.
Takes an expression that yields the same result independent of the number of
times the loop is executed (a loop-invariant computation) and places the expression
before the loop.
 Induction variable elimination
 Reduction in strength: Replacing an expensive operation by a cheaper one.
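Two of the transformations above, constant folding and reduction in strength, can be sketched as one pass over tuple-encoded three-address statements (the tuple layout and the x*2 -> x+x rule choice are illustrative assumptions):

```python
def optimize(code):
    """One pass over (dst, op, arg1, arg2) tuples applying two rules."""
    out = []
    for dst, op, a1, a2 in code:
        if op == '*' and isinstance(a1, int) and isinstance(a2, int):
            out.append((dst, '=', a1 * a2, None))     # constant folding
        elif op == '*' and a2 == 2:
            out.append((dst, '+', a1, a1))            # strength reduction
        else:
            out.append((dst, op, a1, a2))
    return out
```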

