Notes Compile Complete
CONTENTS
Lecture-1 Introduction to compiler & its phases
Lecture-2 Overview of language processing system
Lecture-3 Phases of a Compiler
Lecture-4 Languages
Lecture-5 Converting RE to NFA (Thomson Construction)
Lecture-6 Lexical Analysis
Lecture-7 Lexical Analyzer Generator
Lecture-8 Basics of Syntax Analysis
Lecture-9 Context-Free Grammar
Lecture-10 Left Recursion
Lecture-11 YACC
Lecture-12 Top-down Parsing
Lecture-13 Recursive Predictive Parsing
Lecture-14 Non-recursive Predictive Parsing-LL(1)
Lecture-15 LL(1) Grammar
Lecture-16 Basics of Bottom-up parsing
Lecture-17 Conflicts during shift-reduce parsing
Lecture-18 Operator precedence parsing
Lecture-19 LR Parsing
Lecture-20 Construction of SLR parsing table
Lecture-21 Construction of canonical LR(0) collection
Lecture-22 Shift-Reduce & Reduce-Reduce conflicts
Lecture-23 Construction of canonical LR(1) collection
Lecture-24 Construction of LALR parsing table
Lecture-25 Using ambiguous grammars
Lecture-26 SYNTAX-DIRECTED TRANSLATION
Lecture-27 Translation of Assignment Statements
Lecture-28 Generating 3-address code for Numerical Representation of Boolean expressions
Lecture-29 Statements that Alter Flow of Control
Lecture-30 Postfix Translations
Lecture-31 Array references in arithmetic expressions
Lecture-32 SYMBOL TABLES
Lecture-33 Intermediate Code Generation
Lecture-34 Directed Acyclic Graph
Lecture-35 Flow of control statements with Jump method
Lecture-36 Backpatching
Lecture-37 RUN TIME ADMINISTRATION
Lecture-38 Storage Organization
Lecture-39 ERROR DETECTION AND RECOVERY
Lecture-40 Error Recovery in Predictive Parsing
Lecture-41 CODE OPTIMIZATION
Lecture-42 Local Optimizations
Module-1
Lecture #1
INTRODUCTION TO COMPILERS AND ITS PHASES
A compiler is a program that takes a program written in a source language and translates it into an
equivalent program in a target language. The source language is typically a high-level language and the
target language is machine language.
Necessity of compiler
Techniques used in a lexical analyzer can be used in text editors, information retrieval
systems, and pattern recognition programs.
Techniques used in a parser can be used in query processing systems such as SQL.
Much software with a complex front end needs techniques used in compiler design. For
example, a symbolic equation solver takes an equation as input, and that program must
parse the given input equation.
Most of the techniques used in compiler design can be used in Natural Language
Processing (NLP) systems.
Properties of Compiler
a) Correctness
i) It should produce correct output on execution.
ii) It should report errors.
iii) It should correctly report if the programmer is not following the language syntax.
b) Efficiency
c) Speed of compilation and of the compiled code's execution.
d) Debugging / Usability.
Compiler                                        Interpreter
1. It translates the whole program at a time.   1. It translates statement by statement.
2. Compiler is faster.                          2. Interpreter is slower.
3. Debugging is not easy.                       3. Debugging is easy.
4. Compilers are not portable.                  4. Interpreters are portable.
Types of compiler
1) Native code compiler
A compiler may produce binary output to run/execute on the same computer and operating
system. This type of compiler is called a native code compiler.
2) Cross Compiler
A cross compiler is a compiler that runs on one machine and produces object code for
another machine.
3) Bootstrap compiler
A compiler that has been implemented in the language it compiles is called a self-hosting (bootstrap) compiler.
4) One pass compiler
The compilation is done in one pass over the source program, hence the compilation is
completed very quickly. This approach has been used for programming languages such as
Pascal, COBOL, and FORTRAN.
5) Multi-pass compiler(2 or 3 pass compiler)
In this compiler, the compilation is done step by step. Each step uses the result of the
previous step and creates another intermediate result.
Example: gcc, Turbo C++
6) JIT Compiler
This compiler is used for JAVA programming language and Microsoft .NET
7) Source to source compiler
It is a type of compiler that takes a high-level language as input and produces another high-level
language as output. Example: OpenMP.
List of compiler
1. Ada compiler
2. ALGOL compiler
3. BASIC compiler
4. C# compiler
5. C compiler
6. C++ compiler
7. COBOL compiler
8. Smalltalk compiler
9. Java compiler
Lecture #2
ASSEMBLER
Programmers found it difficult to write or read programs in machine language. They
began to use mnemonics (symbols) for each machine instruction, which they would
subsequently translate into machine language. Such a mnemonic machine language is
now called an assembly language. Programs known as assemblers were written to
automate the translation of assembly language into machine language. The input to an
assembler program is called the source program; the output is a machine language translation
(object program).
INTERPRETER
An interpreter is a program that appears to execute a source program as if it were
machine language.
Languages such as BASIC, SNOBOL and LISP can be translated using interpreters. JAVA
also uses an interpreter. The process of interpretation can be carried out in the following phases:
1. Lexical analysis
2. Syntax analysis
3. Semantic analysis
4. Direct Execution
Advantages
Modification of the user program can easily be made and implemented as
execution proceeds.
The type of object that a variable denotes may change dynamically.
Debugging a program and finding errors is a simplified task for an interpreted
program.
The interpreter for the language makes it machine independent.
Disadvantages
The execution of the program is slower.
Memory consumption is more.
Lecture #3
PHASES OF A COMPILER
Each phase transforms the source program from one representation into another representation.
All phases communicate with the error handler and the symbol table.
Lexical Analyzer
Lexical Analyzer reads the source program character by character and returns the tokens
of the source program.
A token describes a pattern of characters having same meaning in the source program.
(such as identifiers, operators, keywords, numbers, delimiters and so on)
Example:
In the statement newval := oldval + 12, the lexical analyzer returns the tokens
identifier (newval), assignment operator (:=), identifier (oldval), add operator (+) and number (12).
Syntax Analyzer
• A Syntax Analyzer creates the syntactic structure (generally a parse tree) of the given program.
• A syntax analyzer is also called a parser.
• A parse tree describes a syntactic structure.
Example:
For the line of code newval := oldval + 12, the parse tree will be:

            assignment
           /     |     \
   identifier    :=    expression
    (newval)          /     |    \
               identifier   +   number
                (oldval)         (12)
• Depending on how the parse tree is created, there are different parsing techniques.
• These parsing techniques are categorized into two groups:
– Top-Down Parsing,
– Bottom-Up Parsing
• Top-Down Parsing:
– Construction of the parse tree starts at the root, and proceeds towards the leaves.
– Efficient top-down parsers can be easily constructed by hand.
– Recursive Predictive Parsing, Non-Recursive Predictive Parsing (LL Parsing).
• Bottom-Up Parsing:
– Construction of the parse tree starts at the leaves, and proceeds towards the root.
– Normally efficient bottom-up parsers are created with the help of some software tools.
– Bottom-up parsing is also known as shift-reduce parsing.
– Operator-Precedence Parsing – simple, restrictive, easy to implement
– LR Parsing – a much more general form of shift-reduce parsing: LR, SLR, LALR
Semantic Analyzer
A semantic analyzer checks the source program for semantic errors and collects the type
information for the code generation.
Type-checking is an important part of the semantic analyzer.
Normally, semantic information cannot be represented by the context-free language used in
syntax analysis. Therefore, the context-free grammars used in syntax analysis are integrated with
attributes (semantic rules); the result is syntax-directed translation and attribute grammars.
Example:
In the line of code newval := oldval + 12, the type of the identifier newval must match with type of
the expression (oldval+12).
Intermediate Code Generator
A compiler may produce explicit intermediate codes representing the source program.
These intermediate codes are generally machine architecture independent, but the level of
intermediate codes is close to the level of machine codes.
Example:
newval := oldval * fact + 1
With id1 = newval, id2 = oldval and id3 = fact, a possible intermediate code sequence is:
temp1 := id2 * id3
temp2 := temp1 + 1
id1 := temp2
Code Optimizer
The code optimizer optimizes the code produced by the intermediate code generator in the terms of
time and space.
Example:
The above piece of intermediate code can be reduced as follows:
temp1 := id2 * id3
id1 := temp1 + 1
Code Generator
The code generator produces the target code (normally assembly or machine code) from the optimized
intermediate representation.
Example:
Assuming that we have an architecture with instructions that have at least one operand as a machine
register, the Final Code for our line of code will be:
MOVE id2, R1
MULT id3, R1
ADD #1, R1
MOVE R1, id1
Phases of a compiler are the sub-tasks that must be performed to complete the compilation
process. Passes refer to the number of times the compiler has to traverse through the entire
program.
Lecture #4
LANGUAGES
Terminology
– For a string s: s^n = s s s ... s (n times), and s^0 = ε
Operations on Languages
• Concatenation: L1L2 = { s1s2 | s1 ∈ L1 and s2 ∈ L2 }
• Kleene Closure: L* = L^0 ∪ L^1 ∪ L^2 ∪ ...  (zero or more concatenations of L)
• Positive Closure: L+ = L^1 ∪ L^2 ∪ ...  (one or more concatenations of L)
Examples:
• L1 = {a,b,c,d} L2 = {1,2}
• L1L2 = {a1,a2,b1,b2,c1,c2,d1,d2}
• L1 ∪ L2 = {a,b,c,d,1,2}
Regular Expressions
• ε is a regular expression denoting the language {ε}.
• For each a ∈ Σ, a is a regular expression denoting the language {a}.
• If r1 and r2 are regular expressions denoting the languages L1 and L2, then (r1)|(r2) denotes
L1 ∪ L2, (r1)(r2) denotes L1L2, and (r1)* denotes (L1)*.
• (r)+ = (r)(r)*
• (r)? = (r) | ε
• We may remove parentheses by using precedence rules.
– * highest
– concatenation next
– | lowest
• ab*|c means (a(b)*)|(c)
Examples:
– Σ = {0,1}
– 0|1 = {0,1}
– (0|1)(0|1) = {00,01,10,11}
– 0* = {ε ,0,00,000,0000,....}
Finite Automata
A recognizer for a language is a program that takes a string x, and answers “yes” if x is a
sentence of that language, and “no” otherwise.
We call the recognizer of the tokens a finite automaton.
A finite automaton can be: deterministic (DFA) or non-deterministic (NFA)
This means that we may use a deterministic or non-deterministic automaton as a lexical
analyzer.
Both deterministic and non-deterministic finite automata recognize exactly the regular sets.
Which one should we use?
– deterministic – faster recognizer, but it may take more space
– non-deterministic – slower, but it may take less space
– Deterministic automata are widely used in lexical analyzers.
First, we define regular expressions for tokens; Then we convert them into a DFA to get a
lexical analyzer for our tokens.
Transition Graph
Example:
For a DFA over the alphabet {a, b} with start state 0 (the transition graph figure is omitted here),
the Transition Function is:
state    a    b
  0      1    0
  1      1    2
  2      1    0
Note that the entries in this function are single value and not set of values (unlike NFA).
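As a concrete illustration, here is a minimal Python simulator for the transition function above. The dictionary encoding, the start state 0, and the choice of state 2 as the accepting state are assumptions for illustration (the original transition graph is omitted from these notes):

```python
# Transition function from the table above, encoded as a dict.
TRANS = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 1, (2, 'b'): 0,
}
START, ACCEPTING = 0, {2}   # accepting state assumed: figure is missing

def accepts(s: str) -> bool:
    state = START
    for ch in s:
        if (state, ch) not in TRANS:   # no transition: reject
            return False
        state = TRANS[(state, ch)]     # single-valued, unlike an NFA
    return state in ACCEPTING

print(accepts("aab"))   # True: ends in state 2
print(accepts("abb"))   # False: ends in state 0
```

With state 2 accepting, this particular table accepts exactly the strings over {a, b} ending in "ab".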
Lecture #5
Converting RE to NFA (Thomson Construction)
N(r1) and N(r2) are NFAs for regular expressions r1 and r2.
We merge together NFA states by looking at them from the point of view of the input characters:
From the point of view of the input, any two states that are connected by an ε-transition may as
well be the same, since we can move from one to the other without consuming any character.
Thus states which are connected by an ε-transition will be represented by the same state in the
DFA.
If it is possible to have multiple transitions based on the same symbol, then we can regard a
transition on a symbol as moving from a state to a set of states (ie. the union of all those states
reachable by a transition on the current symbol). Thus these states will be combined into a single
DFA state.
To perform this operation, let us define two functions:
• The ε-closure function takes a state and returns the set of states reachable from it based
on (one or more) ε-transitions. Note that this will always include the state itself.
We should be able to get from a state to any state in its ε-closure without consuming any input.
• The function move takes a state and a character, and returns the set of states reachable
by one transition on this character.
We can generalize both these functions to apply to sets of states by taking the union of the
application to individual states.
For Example, if A, B and C are states, move ({A,B,C},`a') = move(A, `a') U move(B, `a') U move
(C,`a').
put ε-closure({s0}) into DS as an unmarked state
while there is an unmarked state S1 in DS do begin
    mark S1
    for each input symbol a do begin
        S2 <- ε-closure(move(S1,a))
        if (S2 is not in DS) then
            add S2 into DS as an unmarked state
        transfunc[S1,a] <- S2
    end
end
For example (for an NFA whose figure is omitted here):
ε-closure(move(S1,a)) = S1, so transfunc[S1,a] <- S1
ε-closure(move(S1,b)) = ε-closure({5}) = {1,2,4,5,6,7} = S2, so transfunc[S1,b] <- S2
⇓ mark S2 and continue
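The construction can be sketched directly in Python. The representation below (a dict mapping (state, symbol) pairs to sets of successor states, with None standing for an ε-transition) is an assumption for illustration, not notation from the notes:

```python
# Subset construction: from an NFA with eps-moves to DFA states.
def eps_closure(states, moves):
    """Set of NFA states reachable by eps-transitions (incl. the states)."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in moves.get((s, None), set()):   # None = eps-transition
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return frozenset(closure)

def move(states, a, moves):
    """All NFA states reachable by one transition on symbol a."""
    return set().union(*(moves.get((s, a), set()) for s in states))

def subset_construction(start, alphabet, moves):
    d0 = eps_closure({start}, moves)
    dstates, unmarked, transfunc = {d0}, [d0], {}
    while unmarked:
        s1 = unmarked.pop()                     # mark S1
        for a in alphabet:
            s2 = eps_closure(move(s1, a, moves), moves)
            if s2 and s2 not in dstates:        # add S2 into DS unmarked
                dstates.add(s2)
                unmarked.append(s2)
            if s2:
                transfunc[(s1, a)] = s2
    return d0, dstates, transfunc
```

Each frozenset of NFA states becomes one DFA state, exactly as in the pseudocode above.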
Lecture #6
LEXICAL ANALYSIS
Lexical Analyzer reads the source program character by character to produce tokens.
Normally a lexical analyzer does not return a list of tokens in one shot; it returns a token only when
the parser asks for one.
Token represents a set of strings described by a pattern. For example, an identifier represents the set
of strings which start with a letter and continue with letters and digits. The actual string is called
a lexeme.
Since a token can represent more than one lexeme, additional information should be held for that
specific lexeme. This additional information is called the attribute of the token.
For simplicity, a token may have a single attribute which holds the required information for that
token. For identifiers, this attribute is a pointer to the symbol table, and the symbol table holds the
actual attributes for that token.
• Examples:
– <identifier, attribute> where attribute is pointer to the symbol table
– <assignment operator> no attribute is needed
– <number, value> where value is the actual value of the number
Pattern:
A pattern is a description of the form that the lexemes of a token may take. In the case of a keyword as a
token, the pattern is just the sequence of characters that form the keyword. For identifiers and some other
tokens, the pattern is a more complex structure that is matched by many strings.
Lexeme:
A lexeme is a sequence of characters in the source program that matches the pattern for a token and is
identified by the lexical analyzer as an instance of that token.
Lexical Analysis versus parsing
There are a number of reasons the analysis portion of a compiler is separated into lexical
analysis and parsing (syntax analysis) phases.
Simplicity of design. The separation of lexical and syntactic analysis often allows us to
simplify at least one of these tasks. For example, a parser that had to deal with
comments and whitespace as syntactic units would be considerably more complex than
one that can assume comments and whitespace have already been removed by the
lexical analyzer.
Because of the amount of time taken to process characters and the large number of characters
that must be processed during the compilation of a large source program, specialized buffering
techniques have been developed to reduce the amount of overhead required to process a single
input character. An important scheme involves two buffers that are alternately reloaded.
Fig: Using a Pair of Input Buffers
Once the next lexeme is determined, forward is set to the character at its right end. Then,
after the lexeme is recorded as an attribute value of a token returned to the parser, lexemeBegin
is set to the character immediately after the lexeme just found. In the figure, forward has passed the
end of the next lexeme, ** (the FORTRAN exponentiation operator), and must be retracted one
position to its left.
Advancing forward requires that we first test whether we have reached the end of one of the
buffers; if so, we must reload the other buffer from the input and move forward to the
beginning of the newly loaded buffer.
Sentinels
For each character read, we make two tests: one for the end of the buffer, and one to
determine what character is read. We can combine the buffer-end test with the test for the
current character if we extend each buffer to hold a sentinel character at the end. The sentinel is
a special character that cannot be part of the source program, and a natural choice is the character
eof. Note that eof retains its use as a marker for the end of the entire input: any eof that appears
other than at the end of a buffer means that the input is at an end.
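A rough Python sketch of the two-buffer scheme with sentinels follows. The class name, buffer size, and the use of '\0' as the sentinel are assumptions for illustration (a real scanner works over raw character arrays); the point is that the common case needs only one test per character:

```python
EOF = "\0"     # sentinel character, assumed absent from the source
N = 4096       # buffer size (illustrative)

class TwoBufferReader:
    def __init__(self, f):
        self.f = f
        self.buf = [None, None]
        self.cur = 0
        self._load(0)
        self.i = 0                     # the 'forward' pointer

    def _load(self, which):
        data = self.f.read(N)
        self.exhausted = len(data) < N # short read: real end of input
        self.buf[which] = data + EOF   # sentinel at the buffer end

    def next_char(self):
        ch = self.buf[self.cur][self.i]
        self.i += 1
        if ch != EOF:
            return ch                  # common case: a single test
        if self.exhausted:
            self.i -= 1                # stay on the final sentinel
            return EOF                 # eof marks true end of input
        other = 1 - self.cur           # sentinel at the end of a full
        self._load(other)              # buffer: reload the other one
        self.cur, self.i = other, 0
        return self.next_char()

import io
r = TwoBufferReader(io.StringIO("newval := oldval + 12"))
out = []
while (c := r.next_char()) != EOF:
    out.append(c)
print("".join(out))
```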
Lecture #7
LEXICAL ANALYZER GENERATOR
LEX is an example of a Lexical Analyzer Generator.
Input to LEX
• The input to LEX consists primarily of Auxiliary Definitions and Translation Rules.
• To write regular expression for some languages can be difficult, because their regular
expressions can be quite complex. In those cases, we may use Auxiliary Definitions.
• We can give names to regular expressions, and we can use these names as symbols to define
other regular expressions.
• An Auxiliary Definition is a sequence of definitions of the form:
d1 → r1
d2 → r2
.
.
dn → rn
where each di is a distinct name, and each ri is a regular expression over the symbols in
Σ ∪ {d1,d2,...,di-1} (the basic symbols plus the previously defined names).
Example:
If we try to write the regular expression representing identifiers without using regular definitions, that
regular expression will be complex:
(A|...|Z|a|...|z) ( (A|...|Z|a|...|z) | (0|...|9) ) *
With auxiliary definitions, the same language reads:
letter → A|...|Z|a|...|z
digit → 0|...|9
identifier → letter ( letter | digit ) *
Example:
For unsigned numbers, one of the auxiliary definitions is an optional fraction part:
opt-fraction → ( . digits ) ?
• Translation Rules comprise an ordered list of Regular Expressions and the Program Code to be
executed when that Regular Expression is encountered:
R1 P1
R2 P2
.
.
Rn Pn
• The list is ordered, i.e., the REs should be checked in order. If a string matches more than one
RE, the RE occurring higher in the list is given preference and its Program Code is
executed.
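The two disciplines just described — longest match, with earlier rules breaking ties — can be sketched in Python, using the re module in place of LEX's generated automaton. The rule set here is hypothetical:

```python
import re

RULES = [                                   # ordered: earlier rules win ties
    ("IF",   re.compile(r"if")),
    ("ID",   re.compile(r"[A-Za-z][A-Za-z0-9]*")),
    ("NUM",  re.compile(r"[0-9]+")),
    ("SKIP", re.compile(r"[ \t\n]+")),
]

def tokenize(src):
    pos, out = 0, []
    while pos < len(src):
        best = None                         # (length, name, lexeme)
        for name, rx in RULES:
            m = rx.match(src, pos)
            # strictly longer match wins; equal length keeps earlier rule
            if m and (best is None or len(m.group()) > best[0]):
                best = (len(m.group()), name, m.group())
        if best is None:
            raise SyntaxError(f"illegal character {src[pos]!r}")
        length, name, lexeme = best
        if name != "SKIP":
            out.append((name, lexeme))
        pos += length
    return out

print(tokenize("if iffy 42"))
# [('IF', 'if'), ('ID', 'iffy'), ('NUM', '42')]
```

On "if" the keyword rule wins the tie by rule order; on "iffy" the longer identifier match wins over the prefix "if".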
Implementation of LEX
• The Regular Expressions are converted into NFA’s. The final states of each NFA correspond to
some RE and its Program Code.
• Different NFA’s are then converted to a single NFA with epsilon moves. Each final state of the NFA
corresponds one-to-one to some final state of individual NFA’s i.e. some RE and its Program Code.
The final states have an order according to the corresponding RE’s. If more than one final state is
entered for some string, then the one that is higher in order is selected.
• This NFA is then converted to DFA. Each final state of DFA corresponds to a set of states (having
at least one final state) of the NFA. The Program Code of each final state (of the DFA) is the
program code corresponding to the final state that is highest in order out of all the final states in the
set of states (of NFA) that make up this final state (of DFA).
Example:
AUXILIARY DEFINITIONS
(none)
TRANSLATION RULES
a    {Action1}
abb  {Action2}
a*b+ {Action3}
First we construct an NFA for each RE and then convert this into a single NFA:
This NFA is now converted into a DFA. The transition table for the above DFA is as follows:
Lecture #8
BASICS OF SYNTAX ANALYSIS
• Syntax Analyzer creates the syntactic structure of the given source program.
• This syntactic structure is mostly a parse tree.
• Syntax Analyzer is also known as parser.
• The syntax of a programming language is described by a context-free grammar (CFG). We will use BNF
(Backus-Naur Form) notation in the description of CFGs.
• The syntax analyzer (parser) checks whether a given source program satisfies the rules implied
by a context-free grammar or not.
– If it satisfies, the parser creates the parse tree of that program.
– Otherwise the parser gives the error messages.
• A context-free grammar
– gives a precise syntactic specification of a programming language.
– the design of the grammar is an initial phase of the design of a compiler.
– a grammar can be directly converted into a parser by some tools.
Parser
A context-free grammar consists of:
– A finite set of terminals (in our case, this will be the set of tokens)
– A finite set of non-terminals (syntactic variables)
– A start symbol (one of the non-terminal symbols)
– A finite set of production rules of the form A → α, where A is a non-terminal and α is a string
of terminals and non-terminals (possibly empty)
Derivations
Example:
(a) E → E + E | E – E | E * E | E / E | - E
(b) E → ( E)
(c) E → id
At each derivation step, we can choose any of the non-terminals in the sentential form of G
for the replacement.
If we always choose the left-most non-terminal in each derivation step, the derivation is
called the left-most derivation.
Example: E ⇒ -E ⇒ -(E) ⇒ -(E+E) ⇒ -(id+E) ⇒ -(id+id)
If we always choose the right-most non-terminal in each derivation step, the derivation is
called the right-most derivation.
Example: E ⇒ -E ⇒ -(E) ⇒ -(E+E) ⇒ -(E+id) ⇒ -(id+id)
We will see that the top-down parsers try to find the left-most derivation of the given source
program.
We will see that the bottom-up parsers try to find the right-most derivation of the given
source program in the reverse order.
Parse Tree
A parse tree is a graphical representation of a derivation that filters out the order of the
replacements: the root is the start symbol, each interior node is a non-terminal, and the leaves,
read from left to right, form the derived sentence. (The example figure is omitted here.)
Ambiguity
A grammar that produces more than one parse tree for some sentence is called an ambiguous
grammar.
For most parsers, the grammar must be unambiguous:
unambiguous grammar → unique selection of the parse tree for a sentence.
We should eliminate the ambiguity in the grammar during the design phase of the compiler.
An unambiguous grammar should be written to eliminate the ambiguity: we prefer one of the
parse trees of a sentence (generated by an ambiguous grammar) and restrict the grammar to
this choice.
Ambiguous grammars (because of ambiguous operators) can be disambiguated according
to the precedence and associativity rules.
Example:
Using the precedence and associativity rules
^ (right to left, highest precedence)
* (left to right)
+ (left to right, lowest precedence)
the ambiguous expression grammar can be rewritten unambiguously as:
E → E+T | T
T → T*F | F
F → G^F | G
G → id | (E)
Lecture #10
Left Recursion
• A grammar is left recursive if it has a non-terminal A such that there is a derivation A ⇒+ Aα
for some string α. Top-down parsing techniques cannot handle left-recursive grammars.
• So, we have to convert our left-recursive grammar into an equivalent grammar which is not left-
recursive.
• The left-recursion may appear in a single step of the derivation (immediate left-recursion), or may
appear in more than one step of the derivation.
Immediate Left-Recursion
A → A α | β    (where β does not start with A)
⇓ eliminate immediate left recursion
A → β A’
A’ → α A’ | ε    (an equivalent grammar)
In general,
A → A α1 | ... | A αm | β1 | ... | βn    where β1 ... βn do not start with A
⇓ eliminate immediate left recursion
A → β1 A’ | ... | βn A’
A’ → α1 A’ | ... | αm A’ | ε    (an equivalent grammar)
Example:
E → E+T | T
T → T*F | F
F → id | (E)
⇓ eliminate immediate left recursion
E → T E’
E’ → +T E’ | ε
T → F T’
T’ → *F T’ | ε
F → id | (E)
Example:
S → Aa | b
A → Sc | d
S ⇒ Aa ⇒ Sca
or
A ⇒ Sc ⇒ Aac
So the grammar causes a left-recursion, even though the left-recursion is not immediate.
Elimination
Arrange the non-terminals in some order: A1 ... An
for i from 1 to n do {
for j from 1 to i-1 do {
replace each production
Ai → Aj γ
by
Ai → α1 γ | ... | αk γ
where Aj → α1 | ... | αk
}
eliminate immediate left-recursions among Ai productions
}
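Here is a small Python transcription of this algorithm — a sketch, not a production tool. The grammar representation (dict from non-terminal to lists of right sides, each a list of symbols, [] for ε) and the primed names are assumptions for illustration:

```python
def eliminate_immediate(A, prods):
    """A -> A alpha | beta  becomes  A -> beta A' ; A' -> alpha A' | eps."""
    alphas = [r[1:] for r in prods if r[:1] == [A]]
    betas  = [r     for r in prods if r[:1] != [A]]
    if not alphas:
        return {A: prods}
    A2 = A + "'"
    return {A:  [b + [A2] for b in betas],
            A2: [a + [A2] for a in alphas] + [[]]}     # [] is epsilon

def eliminate_left_recursion(grammar, order):
    g = {a: [list(r) for r in rs] for a, rs in grammar.items()}
    out = {}
    for i, Ai in enumerate(order):
        for Aj in order[:i]:
            new = []
            for r in g[Ai]:
                if r[:1] == [Aj]:                      # Ai -> Aj gamma
                    new += [d + r[1:] for d in out.get(Aj, g[Aj])]
                else:
                    new.append(r)
            g[Ai] = new
        out.update(eliminate_immediate(Ai, g[Ai]))
        g[Ai] = out[Ai]
    return out

g = {"S": [["A", "a"], ["b"]], "A": [["A", "c"], ["S", "d"], ["f"]]}
print(eliminate_left_recursion(g, ["S", "A"]))
# A becomes A -> bdA' | fA' ,  A' -> cA' | adA' | eps
```

Running it on the example that follows reproduces the result of the S, A ordering.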
Example:
S → Aa | b
A → Ac | Sd | f
— Ordering S, A —
for S: nothing to do (there is no earlier non-terminal to substitute, and S is not immediately
left-recursive).
for A:
- Replace A → Sd with A → Aad | bd, so the A-productions become A → Ac | Aad | bd | f
- Eliminate the immediate left-recursion among the A-productions:
A → bdA’ | fA’
A’ → cA’ | adA’ | ε
So, the resulting equivalent grammar which is not left-recursive is:
S → Aa | b
A → bdA’ | fA’
A’ → cA’ | adA’ | ε
— Ordering A, S —
for A:
- we do not enter the inner loop.
- Eliminate the immediate left-recursion in A
A → SdA’ | fA’
A’ → cA’ | ε
for S:
- Replace S → Aa with S → SdA’a | fA’a, so the S-productions become S → SdA’a | fA’a | b
- Eliminate the immediate left-recursion among the S-productions:
S → fA’aS’ | bS’
S’ → dA’aS’ | ε
So, the resulting equivalent grammar which is not left-recursive is:
S → fA’aS’ | bS’
S’ → dA’aS’ | ε
A → SdA’ | fA’
A’ → cA’ | ε
Left Factoring
• A predictive parser (a top-down parser without backtracking) insists that the grammar must
be left-factored.
A → αβ1 | αβ2
where α is non-empty and the first symbols of β1 and β2 (if they have one) are different.
When processing the input matching α, we cannot know whether to expand
A to αβ1 or
A to αβ2
But if we re-write the grammar as A → αA’ with A’ → β1 | β2, we can immediately expand A to αA’.
10.1 Algorithm
• For each non-terminal A with two or more alternatives (production rules) with a common
non-empty prefix α, say
A → αβ1 | ... | αβn | γ1 | ... | γm
convert the rules into
A → αA’ | γ1 | ... | γm
A’ → β1 | ... | βn
Example:
A → abB | aB | cdg | cdeB | cdfB
⇓
A → aA’ | cdA’’
A’ → bB | B
A’’ → g | eB | fB
Example:
A → ad | a | ab | abc | b
⇓
A → aA’ | b
A’ → d | ε | b | bc
⇓ (left-factor A’ once more)
A → aA’ | b
A’ → d | ε | bA’’
A’’ → ε | c
Lecture #11
YACC
YACC generates C code for a syntax analyzer, or parser. YACC uses grammar rules that allow it to
analyze tokens from LEX and create a syntax tree. A syntax tree imposes a hierarchical structure on
tokens. For example, operator precedence and associativity are apparent in the syntax tree. The
next step, code generation, does a depth-first walk of the syntax tree to generate code. Some
compilers produce machine code, while others output assembly.
YACC takes a default action when there is a conflict. For shift-reduce conflicts, YACC will shift.
For reduce-reduce conflicts, it will use the first rule in the listing. It also issues a warning message
whenever a conflict exists. The warnings may be suppressed by making the grammar unambiguous.
Input to YACC is divided into three sections. The definitions section consists of token declarations,
and C code bracketed by %{ and %}. The BNF grammar is placed in the rules section, and user
subroutines are added in the subroutines section.
Lecture #12
TOP-DOWN PARSING
• Construction of the parse tree starts at the root and proceeds towards the leaves; backtracking
may be needed when a chosen production turns out to be wrong.
• It tries to find the left-most derivation.
Predictive Parser
• When re-writing a non-terminal in a derivation step, a predictive parser can uniquely choose a
production rule by just looking at the current symbol in the input string.
Example:
stmt → if ...... |
while ...... |
begin ...... |
for .....
When we are trying to re-write the non-terminal stmt, we can uniquely choose the production rule by
just looking at the current token: each alternative starts with a distinct keyword.
We eliminate the left recursion in the grammar and left-factor it. But the resulting grammar still
may not be suitable for predictive parsing (i.e. it may not be an LL(1) grammar).
Lecture #13
Recursive Predictive Parsing
Example:
A → aBb | bAB
proc A {
case of the current token {
‘a’: - match the current token with a, and move to the next token;
- call ‘B’;
- match the current token with b, and move to the next token;
‘b’: - match the current token with b, and move to the next token;
- call ‘A’;
- call ‘B’;
}
}
A → aA | bB | ε
If all other productions fail, we should apply an ε-production. For example, if the current token is not
a or b, we may apply the ε-production.
Most correct choice: we should apply an ε-production for a non-terminal A when the current token is
in the FOLLOW set of A (the set of terminals that can follow A in sentential forms).
Example:
A → aBe | cBd | C
B → bB | ε
C→f
proc A {
case of the current token {
a: - match the current token with a and move to the next token;
- call B;
- match the current token with e and move to the next token;
c: - match the current token with c and move to the next token;
- call B;
- match the current token with d and move to the next token;
f: - call C //First Set of C
}
}
proc C {
  match the current token with f and move to the next token;
}
proc B {
  case of the current token {
    b: - match the current token with b and move to the next token;
       - call B;
    e, d: do nothing    (apply B → ε, since FOLLOW(B) = {e, d})
  }
}
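The same procedures can be written as a runnable Python sketch (the class and method names are illustrative). Note how the ε-production for B is applied exactly when the current token is in FOLLOW(B) = {e, d}:

```python
class Parser:
    def __init__(self, tokens):
        self.toks = list(tokens) + ["$"]
        self.i = 0

    def cur(self):
        return self.toks[self.i]

    def match(self, t):
        if self.cur() != t:
            raise SyntaxError(f"expected {t}, got {self.cur()}")
        self.i += 1

    def A(self):                         # A -> aBe | cBd | C
        if self.cur() == "a":
            self.match("a"); self.B(); self.match("e")
        elif self.cur() == "c":
            self.match("c"); self.B(); self.match("d")
        elif self.cur() == "f":          # FIRST(C) = {f}
            self.C()
        else:
            raise SyntaxError(f"unexpected {self.cur()}")

    def B(self):                         # B -> bB | eps
        if self.cur() == "b":
            self.match("b"); self.B()
        elif self.cur() in ("e", "d"):   # token in FOLLOW(B): B -> eps
            pass
        else:
            raise SyntaxError(f"unexpected {self.cur()}")

    def C(self):                         # C -> f
        self.match("f")

Parser("abbe").A()   # parses a b b e
Parser("cd").A()     # parses c d (B derives eps)
```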
Lecture #14
Non-Recursive Predictive Parsing – LL (1) Parser
input buffer – contains the string to be parsed, followed by the end marker $
stack
– contains the grammar symbols
– at the bottom of the stack, there is a special end marker symbol $.
– initially the stack contains only the symbol $ and the starting symbol S. ($S<-initial stack)
– when the stack is emptied (i.e. only $ left in the stack), the parsing is completed.
parsing table
– a two-dimensional array M[A,a]
– each row is a non-terminal symbol
– each column is a terminal symbol or the special symbol $
– each entry holds a production rule.
Parser Actions
The symbol at the top of the stack (say X) and the current symbol in the input string (say a)
determine the parser action. There are four possible parser actions.
1. If X and a are both $, the parser halts and announces the successful completion of parsing (accept).
2. If X = a ≠ $, the parser pops X from the stack and advances the input pointer to the next symbol (match).
3. If X is a non-terminal, the parser looks at the parsing table entry M[X,a]. If M[X,a] holds a production rule
X → Y1Y2...Yk, it pops X from the stack and pushes Yk, Yk-1, ..., Y1 onto the stack. The parser
also outputs the production rule X → Y1Y2...Yk to represent a step of the derivation.
4. Otherwise (X is a terminal different from a, or M[X,a] is empty), the parser reports an error; all
empty entries in the parsing table are error entries.
Example:
For the Grammar is S → aBa; B → bB | ε and the following LL(1) parsing table:
        a          b          $
S    S → aBa
B    B → ε      B → bB
• Two functions are used in the construction of LL(1) parsing tables -FIRST & FOLLOW
• FIRST(α) is a set of the terminal symbols which occur as first symbols in strings derived from
α where α is any string of grammar symbols.
• if α derives to ε, then ε is also in FIRST(α) .
• FOLLOW(A) is the set of the terminals which occur immediately after (follow) the non-terminal
A in the strings derived from the starting symbol.
– A terminal a is in FOLLOW(A) if S ⇒* αAaβ
– $ is in FOLLOW(A) if S ⇒* αA
Example:
E → TE’
E’ → +TE’ | ε
T → FT’
T’ → *FT’ | ε
F → (E) | id
FIRST(F) = {(,id}
FIRST(T’) = {*, ε}
FIRST(T) = {(,id}
FIRST(E’) = {+, ε}
FIRST(E) = {(,id}
FIRST(TE’) = {(,id}
FIRST(+TE’ ) = {+}
FIRST(ε) = {ε}
FIRST(FT’) = {(,id}
FIRST(*FT’) = {*}
FIRST((E)) = {(}
FIRST(id) = {id}
FOLLOW(E) = { $, ) }
FOLLOW(E’) = { $, ) }
FOLLOW(T) = { +, ), $ }
FOLLOW(T’) = { +, ), $ }
FOLLOW(F) = {+, *, ), $ }
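These sets can be computed by a simple fixed-point iteration. The following Python sketch reproduces the sets listed above; the representation (productions as lists of symbols, [] for ε, "eps" as the ε marker) is an assumption for illustration:

```python
GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],
    "T":  [["F", "T'"]],
    "T'": [["*", "F", "T'"], []],
    "F":  [["(", "E", ")"], ["id"]],
}
NT = set(GRAMMAR)

def first_of(seq, first):
    """FIRST of a string of grammar symbols."""
    out = set()
    for X in seq:
        f = first[X] if X in NT else {X}
        out |= f - {"eps"}
        if "eps" not in f:
            return out
    return out | {"eps"}           # every symbol can derive eps

def compute_first_follow(start="E"):
    first = {A: set() for A in NT}
    changed = True
    while changed:                 # iterate FIRST to a fixed point
        changed = False
        for A, rules in GRAMMAR.items():
            for r in rules:
                f = first_of(r, first)
                if not f <= first[A]:
                    first[A] |= f; changed = True
    follow = {A: set() for A in NT}
    follow[start].add("$")
    changed = True
    while changed:                 # iterate FOLLOW to a fixed point
        changed = False
        for A, rules in GRAMMAR.items():
            for r in rules:
                for i, X in enumerate(r):
                    if X not in NT:
                        continue
                    tail = first_of(r[i+1:], first)
                    add = tail - {"eps"}
                    if "eps" in tail:
                        add |= follow[A]
                    if not add <= follow[X]:
                        follow[X] |= add; changed = True
    return first, follow

first, follow = compute_first_follow()
print(first["E'"])    # {'+', 'eps'}
print(follow["F"])    # {'+', '*', ')', '$'}
```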
        id           +             *             (            )           $
E    E → TE’                                  E → TE’
E’                E’ → +TE’                                E’ → ε      E’ → ε
T    T → FT’                                  T → FT’
T’                T’ → ε        T’ → *FT’                  T’ → ε      T’ → ε
F    F → id                                   F → (E)
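A table-driven driver for this table can be sketched as follows. The table is transcribed into a Python dict (an assumption for illustration); empty entries are simply absent and raise an error, matching the convention that all blank entries are error entries:

```python
TABLE = {
    ("E",  "id"): ["T", "E'"],      ("E",  "("): ["T", "E'"],
    ("E'", "+"):  ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T",  "id"): ["F", "T'"],      ("T",  "("): ["F", "T'"],
    ("T'", "+"):  [],               ("T'", "*"): ["*", "F", "T'"],
    ("T'", ")"):  [],               ("T'", "$"): [],
    ("F",  "id"): ["id"],           ("F",  "("): ["(", "E", ")"],
}
NONTERMS = {"E", "E'", "T", "T'", "F"}

def parse(tokens):
    inp = tokens + ["$"]
    stack = ["$", "E"]                  # $S is the initial stack
    i = 0
    while stack:
        X, a = stack.pop(), inp[i]
        if X == a == "$":
            return True                 # accept
        if X not in NONTERMS:           # terminal on top: match it
            if X != a:
                raise SyntaxError(f"expected {X}, got {a}")
            i += 1
        elif (X, a) in TABLE:
            rhs = TABLE[(X, a)]
            stack.extend(reversed(rhs)) # push Yk ... Y1
            print(X, "->", " ".join(rhs) or "eps")
        else:
            raise SyntaxError(f"no rule for ({X}, {a})")

parse(["id", "+", "id", "*", "id"])
```

The printed productions form exactly the left-most derivation of id + id * id.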
Lecture #15
LL(1) Grammars
• A grammar whose parsing table has no multiply-defined entries is said to be LL(1) grammar.
• If the parsing table of a grammar contains a multiply-defined entry (more than one production rule
in a cell), we say that the grammar is not an LL(1) grammar.
• A grammar G is LL(1) if and only if the following conditions hold for two distinctive production
rules A → α and A → β:
1. Both α and β cannot derive strings starting with same terminals.
2. At most one of α and β can derive to ε.
3. If β can derive to ε, then α cannot derive to any string starting with a terminal in
FOLLOW(A).
Example:
S→iCtSE | a
E→eS | ε
C→b
FOLLOW(S) = { $,e }
FOLLOW(E) = { $,e }
FOLLOW(C) = { t }
FIRST(iCtSE) = {i}, FIRST(a) = {a}, FIRST(eS) = {e}, FIRST(ε) = {ε}, FIRST(b) = {b}

        a         b         e              i             t        $
S     S → a                           S → iCtSE
E                       E → eS                                  E → ε
                        E → ε
C               C → b

Since M[E,e] contains two production rules (E → eS and E → ε), this grammar is not LL(1).
• What do we have to do it if the resulting parsing table contains multiply defined entries?
– If we didn’t eliminate left recursion, eliminate the left recursion in the grammar.
– If the grammar is not left factored, we have to left factor the grammar.
– If its (new grammar’s) parsing table still contains multiply defined entries, that grammar is
ambiguous or it is inherently not a LL(1) grammar.
• A left recursive grammar cannot be a LL(1) grammar.
– A → Aα | β
⇒ any terminal that appears in FIRST(β) also appears in FIRST(Aα), because Aα ⇒ βα.
⇒ If β is ε, any terminal that appears in FIRST(α) also appears in FIRST(Aα) and
FOLLOW(A).
Lecture #16
BASICS OF BOTTOM-UP PARSING
• A bottom-up parser creates the parse tree of the given input starting from the leaves towards the
root.
• A bottom-up parser tries to find the right-most derivation of the given input in the reverse order.
(a) S ⇒ ... ⇒ ω    (the right-most derivation of ω)
(b) ω ⇐ ... ⇐ S    (the bottom-up parser finds the right-most derivation in the reverse
order)
• Bottom-up parsing is also known as shift-reduce parsing because its two main actions are shift
and reduce.
– At each shift action, the current symbol in the input string is pushed to a stack.
– At each reduction step, the symbols at the top of the stack (this symbol sequence is the
right side of a production) will replaced by the non-terminal at the left side of that production.
– There are also two more actions: accept and error.
Shift-Reduce Parsing
• A shift-reduce parser tries to reduce the given input string into the starting symbol.
• At each reduction step, a substring of the input matching to the right side of a production rule is
replaced by the non-terminal at the left side of that production rule.
• If the substring is chosen correctly, the right most derivation of that string is created in the reverse
order.
Example:
For Grammar S → aABb; A → aA | a; B → bB | b and Input string aaabb,
aaabb
⇒ aaAbb
⇒ aAbb
⇒ aABb
⇒ S
The above reduction corresponds to the following rightmost derivation: S ⇒ aABb ⇒ aAbb ⇒ aaAbb
⇒ aaabb
Handle
• Informally, a handle of a string is a substring that matches the right side of a production rule.
- But not every substring that matches the right side of a production rule is a handle.
• A handle of a right sentential form γ (≡ αβω) is a production rule A → β and a position of γ where
the string β may be found and replaced by A to produce the previous right-sentential form in a
rightmost derivation of γ.
S ⇒* αAω ⇒ αβω
• If the grammar is unambiguous, then every right-sentential form of the grammar has exactly one
handle.
• We will see that ω is a string of terminals.
Handle Pruning: a right-most derivation in reverse can be obtained by handle pruning. Let
S = γ0 ⇒ γ1 ⇒ γ2 ⇒ ... ⇒ γn-1 ⇒ γn = ω    (ω is the input string)
• Start from γn, find a handle An → βn in γn, and replace βn by An to get γn-1.
• Then find a handle An-1 → βn-1 in γn-1, and replace βn-1 by An-1 to get γn-2.
• Repeat this until we reach S.
Example:
E → E+T | T
T → T*F | F
F → (E) | id
Stack Implementation
Example: For the grammar above and input id+id*id:

Stack        Input         Action
$            id+id*id$     shift
$id          +id*id$       reduce by F → id
$F           +id*id$       reduce by T → F
$T           +id*id$       reduce by E → T
$E           +id*id$       shift
$E+          id*id$        shift
$E+id        *id$          reduce by F → id
$E+F         *id$          reduce by T → F
$E+T         *id$          shift
$E+T*        id$           shift
$E+T*id      $             reduce by F → id
$E+T*F       $             reduce by T → T*F
$E+T         $             reduce by E → E+T
$E           $             accept
Lecture #17
CONFLICTS DURING SHIFT-REDUCE PARSING
• There are context-free grammars for which shift-reduce parsers cannot be used.
• Stack contents and the next input symbol may not decide action:
– shift/reduce conflict: Whether make a shift operation or a reduction.
– reduce/reduce conflict: The parser cannot decide which of several reductions to make.
• If a shift-reduce parser cannot be used for a grammar, that grammar is called a non-LR(k)
grammar.
• LR(k): L stands for left-to-right scanning of the input, R for (the reverse of) a rightmost
derivation, and k for the number of lookahead symbols.
• An ambiguous grammar can never be an LR grammar.
There are two main categories of shift-reduce parsers:
1. Operator-Precedence Parser
– simple, but only a small class of grammars.
2. LR-Parsers
– Covers wide range of grammars.
• SLR – Simple LR parser
• CLR – most general LR parser (Canonical LR)
• LALR – intermediate LR parser (Look Ahead LR)
– SLR, CLR and LALR work same, only their parsing tables are different.
Lecture #18
Operator Precedence Parsing
• Operator grammar
– small, but an important class of grammars
– we may have an efficient operator precedence parser (a shift-reduce parser) for an operator
grammar.
• In an operator grammar, no production rule can have:
– ε at the right side
– two adjacent non-terminals at the right side.
Examples:
1) E → AB; A → a; B → b — not an operator grammar (the right side AB has two adjacent non-terminals).
2) E → EOE; E → id; O → + | * | / — not an operator grammar (EOE has adjacent non-terminals).
3) E → E+E | E*E | E/E | id — an operator grammar.
Precedence Relations
• The determination of correct precedence relations between terminals are based on the traditional
notions of associativity and precedence of operators. (Unary minus causes a problem).
• The intention of the precedence relations is to find the handle of a right-sentential form, with
<. marking the left end of the handle,
=· appearing in the interior of the handle, and
.> marking the right end.
• In our input string $a1a2...an$, we insert the precedence relation between the pairs of terminals
(the precedence relation holds between the terminals in that pair).
• Scan the string from left end until the first .> is encountered.
• Then scan backwards (to the left) over any =· until a < . is encountered.
• The handle contains everything to left of the first .> and to the right of the <. is encountered.
The handles thus obtained can be used to shift reduce a given string.
Parsing Algorithm
The input string is w$, the initial stack contains only $, and a table holds precedence relations between
certain terminals. Repeat the following steps:
– let a be the topmost terminal on the stack and b the current input symbol;
– if a is $ and b is $, announce acceptance;
– if a <. b or a =· b, shift b onto the stack and advance the input;
– if a .> b, reduce: pop the stack until the topmost terminal is related by <. to the terminal most
recently popped (everything popped is the handle);
– otherwise (no relation holds), report an error.
Precedence relations for the operators of a grammar can be set as follows:
1. If operator O1 has higher precedence than operator O2: O1 .> O2 and O2 <. O1.
2. If operator O1 and operator O2 have equal precedence:
– if they are left-associative: O1 .> O2 and O2 .> O1
– if they are right-associative: O1 <. O2 and O2 <. O1
3. For all operators O: O <. id, id .> O, O <. (, ( <. O, ) .> O, O .> ), O .> $, and $ <. O.
4. Also:
( =· )     $ <. (      id .> )     ) .> $
( <. (     $ <. id     id .> $     ) .> )
( <. id
Example:
The complete table for the Grammar E → E+E | E-E | E*E | E/E | E^E | (E) | -E | id is:
        +     -     *     /     ^     id    (     )     $
+      .>    .>    <.    <.    <.    <.    <.    .>    .>
-      .>    .>    <.    <.    <.    <.    <.    .>    .>
*      .>    .>    .>    .>    <.    <.    <.    .>    .>
/      .>    .>    .>    .>    <.    <.    <.    .>    .>
^      .>    .>    .>    .>    <.    <.    <.    .>    .>
id     .>    .>    .>    .>    .>                .>    .>
(      <.    <.    <.    <.    <.    <.    <.    =·
)      .>    .>    .>    .>    .>                .>    .>
$      <.    <.    <.    <.    <.    <.    <.
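A driver using such a relation table can be sketched in Python. Only a fragment of the table above is transcribed, and the stack is simplified to hold terminals only, so this is an illustration of the shift/reduce discipline rather than a full parser:

```python
# REL[(a, b)] holds '<', '=' or '>' for terminals a and b.
REL = {
    ("$", "id"): "<", ("id", "$"): ">",
    ("$", "+"): "<",  ("+", "$"): ">",
    ("$", "*"): "<",  ("*", "$"): ">",
    ("+", "id"): "<", ("id", "+"): ">",
    ("*", "id"): "<", ("id", "*"): ">",
    ("+", "+"): ">",  ("*", "*"): ">",
    ("+", "*"): "<",  ("*", "+"): ">",
}

def parse(tokens):
    stack = ["$"]                      # terminals only (simplification)
    toks = tokens + ["$"]
    i = 0
    while not (stack == ["$"] and toks[i] == "$"):
        a, b = stack[-1], toks[i]
        rel = REL.get((a, b))
        if rel in ("<", "="):          # shift
            stack.append(b); i += 1
        elif rel == ">":               # reduce: pop back to the <.
            popped = stack.pop()
            while REL.get((stack[-1], popped)) != "<":
                popped = stack.pop()
            print("reduced a handle ending with", popped)
        else:
            raise SyntaxError(f"no relation between {a} and {b}")
    print("accept")

parse(["id", "+", "id", "*", "id"])
```

Note how on input id + id * id the id's are reduced first, then *, then +, respecting the precedence encoded in the table.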
Operator-Precedence Grammars
There is another more general way to compute precedence relations among terminals:
1. a =· b if there is a right side of a production of the form αaβbγ, where β is either a single non-
terminal or ε.
2. a <. b if for some non-terminal A there is a right side of the form αaAβ and A derives to γbδ,
where γ is a single non-terminal or ε.
3. a .> b if for some non-terminal A there is a right side of the form αAbβ and A derives to γaδ,
where δ is a single non-terminal or ε.
Note that the grammar must be unambiguous for this method. Unlike the previous method, it does
not take into account any other property and is based purely on grammar productions. An
ambiguous grammar will result in multiple entries in the table and thus cannot be used.
• Operator-Precedence parsing cannot handle the unary minus when we also use the binary minus
in our grammar.
• The best approach to solve this problem is to let the lexical analyzer handle this problem.
–The lexical analyzer will return two different operators for the unary minus and the binary
minus.
–The lexical analyzer will need a look ahead to distinguish the binary minus from the unary
minus.
• Then, we make O <. unary-minus for any operator O; unary-minus .> O if the unary minus has
higher precedence than O, and unary-minus <. O if it has lower (or equal) precedence.
Precedence Functions
• Compilers using operator precedence parsers do not need to store the table of precedence
relations.
• The table can be encoded by two precedence functions f and g that map terminal symbols to
integers.
• For symbols a and b.
f(a) < g(b) whenever a <. b
f(a) = g(b) whenever a =· b
f(a) > g(b) whenever a .> b
• Advantages:
– simple
– powerful enough for expressions in programming languages
• Disadvantages:
– It cannot handle the unary minus (the lexical analyzer should handle the unary minus).
– Small class of grammars.
– Difficult to decide which language is recognized by the grammar.
Lecture #19
LR PARSING
Parser Configuration
A configuration of an LR parser is:
( So X1 S1 ... Xm Sm, ai ai+1 ... an $ )
   (stack)              (rest of the input)
• Sm and ai decide the parser action by consulting the parsing action table. (The initial stack
contains just So.)
Parser Actions
1. shift s -- shifts the next input symbol and the state s onto the stack
( So X1 S1 ... Xm Sm, ai ai+1 ... an $ ) -> ( So X1 S1 ... Xm Sm ai s, ai+1 ... an $ )
2. reduce A → β
– pop 2|β| items from the stack (where |β| = r); let us assume that β = Y1Y2...Yr
– then push A and s where s = goto[sm-r, A]
3. accept -- parsing is successfully completed
4. error -- the parser detected an error (an empty entry in the action table)
1) E → E+T
2) E → T
3) T → T*F
4) T → F
5) F → (E)
6) F → id
                       Action                                  Goto
state    id      +      *      (      )      $            E     T     F
  0      s5                    s4                          1     2     3
  1             s6                           acc
  2             r2     s7             r2     r2
  3             r4     r4             r4     r4
  4      s5                    s4                          8     2     3
  5             r6     r6             r6     r6
  6      s5                    s4                                9     3
  7      s5                    s4                                      10
  8             s6                    s11
  9             r1     s7             r1     r1
 10             r3     r3             r3     r3
 11             r5     r5             r5     r5
The action of the parser would be as follows:
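The step-by-step trace figure is omitted from these notes, but it can be regenerated by running the table through a small driver. The sketch below transcribes the table into Python dicts and keeps only states on the stack (grammar symbols are implicit), a common simplification:

```python
ACTION = {
    (0,"id"):"s5", (0,"("):"s4",
    (1,"+"):"s6", (1,"$"):"acc",
    (2,"+"):"r2", (2,"*"):"s7", (2,")"):"r2", (2,"$"):"r2",
    (3,"+"):"r4", (3,"*"):"r4", (3,")"):"r4", (3,"$"):"r4",
    (4,"id"):"s5", (4,"("):"s4",
    (5,"+"):"r6", (5,"*"):"r6", (5,")"):"r6", (5,"$"):"r6",
    (6,"id"):"s5", (6,"("):"s4",
    (7,"id"):"s5", (7,"("):"s4",
    (8,"+"):"s6", (8,")"):"s11",
    (9,"+"):"r1", (9,"*"):"s7", (9,")"):"r1", (9,"$"):"r1",
    (10,"+"):"r3", (10,"*"):"r3", (10,")"):"r3", (10,"$"):"r3",
    (11,"+"):"r5", (11,"*"):"r5", (11,")"):"r5", (11,"$"):"r5",
}
GOTO = {(0,"E"):1,(0,"T"):2,(0,"F"):3,(4,"E"):8,(4,"T"):2,(4,"F"):3,
        (6,"T"):9,(6,"F"):3,(7,"F"):10}
RULES = {1:("E",3), 2:("E",1), 3:("T",3), 4:("T",1), 5:("F",3), 6:("F",1)}

def parse(tokens):
    toks = tokens + ["$"]
    stack, i = [0], 0                  # stack of states, S0 at the bottom
    while True:
        act = ACTION.get((stack[-1], toks[i]))
        if act is None:
            raise SyntaxError(f"error at {toks[i]}")
        if act == "acc":
            return True
        if act[0] == "s":              # shift the state
            stack.append(int(act[1:])); i += 1
        else:                          # reduce by numbered rule
            lhs, rlen = RULES[int(act[1:])]
            del stack[len(stack)-rlen:]        # pop |beta| states
            stack.append(GOTO[(stack[-1], lhs)])
            print("reduce by rule", int(act[1:]), "(", lhs, ")")

parse(["id", "*", "id", "+", "id"])
```

On id*id+id it prints the reductions F → id, T → F, F → id, T → T*F, E → T, F → id, T → F, E → E+T, then accepts.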
LR(0) Items
• An LR(0) item of a grammar G is a production of G with a dot at some position of the right
side.
Example:
The production A → aBb yields the four LR(0) items:
A → .aBb
A → a.Bb
A → aB.b
A → aBb.
• Sets of LR(0) items will be the states of action and goto table of the SLR parser.
• A collection of sets of LR(0) items (the canonical LR(0) collection) is the basis for
constructing SLR parsers.
Closure Operation
If I is a set of LR(0) items for a grammar G, then closure(I) is the set of LR(0) items constructed
from I by the two rules:
1. Initially, every LR(0) item in I is in closure(I).
2. If A → α.Bβ is in closure(I) and B → γ is a production rule of G, then B → .γ is in closure(I).
This rule is applied until no more new LR(0) items can be added.
Example: For the grammar
E’ → E ; E → E+T;
E → T;
T → T*F; T → F;
F → (E);
F → id
closure({E’ → .E}) = { E’ → .E, E → .E+T, E → .T, T → .T*F, T → .F, F → .(E), F → .id }
Goto Operation
If I is a set of LR(0) items and X is a grammar symbol (terminal or non-terminal), then goto(I,X) is
defined as the closure of the set of all items A → αX.β such that A → α.Xβ is in I.
Example: for I = closure({E’ → .E}) above,
goto(I,F) = { T → F. }
goto(I,id) = { F → id. }
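Both operations are easy to express in Python over items represented as (lhs, rhs, dot) triples; this representation is an assumption for illustration:

```python
GRAMMAR = [
    ("E'", ("E",)), ("E", ("E","+","T")), ("E", ("T",)),
    ("T", ("T","*","F")), ("T", ("F",)),
    ("F", ("(","E",")")), ("F", ("id",)),
]
NT = {lhs for lhs, _ in GRAMMAR}

def closure(items):
    result = set(items)
    changed = True
    while changed:
        changed = False
        for (lhs, rhs, dot) in list(result):
            if dot < len(rhs) and rhs[dot] in NT:  # A -> alpha . B beta
                for (l2, r2) in GRAMMAR:
                    if l2 == rhs[dot]:
                        item = (l2, r2, 0)         # add B -> . gamma
                        if item not in result:
                            result.add(item); changed = True
    return frozenset(result)

def goto(items, X):
    moved = {(l, r, d + 1) for (l, r, d) in items
             if d < len(r) and r[d] == X}
    return closure(moved)

I0 = closure({("E'", ("E",), 0)})
print(len(I0))                    # 7 items, matching the example above
print(sorted(goto(I0, "T")))      # I2: E -> T. and T -> T.*F
```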
Lecture #21
Construction of The Canonical LR(0) Collection
To create the SLR parsing tables for a grammar G, we will create the canonical LR(0)
collection of the augmented grammar G’ (G plus the new production S’ → S).
Algorithm:
C is { closure({S’→.S}) }
repeat the followings until no more set of LR(0) items can be added to C.
for each I in C and each grammar symbol X
if goto(I,X) is not empty and not in C
add goto(I,X) to C
The GOTO function defines a DFA on the sets in C.
Example:
For grammar used above, Canonical LR(0) items are as follows-
I0: E’ → .E, E → .E+T, E → .T, T → .T*F, T → .F, F → .(E), F → .id
I1: E’ → E., E → E.+T
I2: E → T., T → T.*F
I3: T → F.
I4: F → (.E), E → .E+T, E → .T, T → .T*F, T → .F, F → .(E), F → .id
I5: F → id.
I6: E → E+.T, T → .T*F, T → .F, F → .(E), F → .id
I7: T → T*.F, F → .(E), F → .id
I8: F → (E.), E → E.+T
I9: E → E+T., T → T.*F
I10: T → T*F.
I11: F → (E).
Construction of SLR Parsing Tables
1. Construct the canonical collection of the sets of LR(0) items for the augmented grammar G’:
C ← {I0, ..., In}.
2. Create the parsing action table as follows:
• If a is a terminal, A → α.aβ is in Ii and goto(Ii,a) = Ij, then action[i,a] = shift j.
• If A → α. is in Ii, then action[i,a] = reduce A → α for all a in FOLLOW(A), where A ≠ S’.
• If S’ → S. is in Ii, then action[i,$] = accept.
• If any conflicting actions are generated by these rules, the grammar is not SLR(1).
3. Create the parsing goto table: for all non-terminals A, if goto(Ii,A) = Ij, then goto[i,A] = j.
4. All entries not defined by rules (2) and (3) are errors.
5. The initial state of the parser is the one containing the item S’ → .S.
Lecture #22
SHIFT-REDUCE AND REDUCE-REDUCE CONFLICTS
• If a state does not know whether it will make a shift operation or reduction for a terminal,
we say that there is a shift/reduce conflict.
Example:
S → L=R | R
L → *R | id
R → L

I0: S’ → .S, S → .L=R, S → .R, L → .*R, L → .id, R → .L
I1: S’ → S.
I2: S → L.=R, R → L.
I3: S → R.
I4: L → *.R, R → .L, L → .*R, L → .id
I5: L → id.
I6: S → L=.R, R → .L, L → .*R, L → .id
I7: L → *R.
I8: R → L.
I9: S → L=R.

Problem in I2: on input =, action[2,=] = shift 6, but since = is in FOLLOW(R), action[2,=] also calls
for reduce by R → L. So state I2 has a shift/reduce conflict, and the grammar is not SLR(1).
• If a state does not know whether it will make a reduction operation using the production
rule i or j for a terminal, we say that there is a reduce/reduce conflict.
Example:
S → AaAb        I0: S’ → .S
S → BbBa            S → .AaAb
A → ε               S → .BbBa
B → ε               A → .
                    B → .
Since FOLLOW(A) = FOLLOW(B) = {a, b}, state I0 has reduce/reduce conflicts on both a and b.
If the SLR parsing table of a grammar G has a conflict, we say that that grammar is not an SLR
grammar.
• In the SLR method, the state i makes a reduction by A → α when the current token is a,
if A → α. is in Ii and a is in FOLLOW(A).
• In some situations, however, βA cannot be followed by the terminal a in a right-sentential form
when βα and the state i are on the top of the stack. This means that making the reduction in this
case is not correct.
LR(1) Item
• To avoid some invalid reductions, the states need to carry more information.
• Extra information is put into a state by including a terminal symbol as a second component
in an item.
• An LR(1) item is [A → α.β, a], where a is the look-ahead of the item (a terminal or the
end-marker $). The look-ahead matters only in items of the form [A → α., a]: such an item calls
for a reduction by A → α only when the next input symbol is a.
• closure(I) contains every item in I; and if [A → α.Bβ, a] is in closure(I) and B → γ is a
production rule, then [B → .γ, b] is added for every terminal b in FIRST(βa).
• If I is a set of LR(1) items and X is a grammar symbol (terminal or non-terminal), then goto(I,X)
is the closure of the set of all items [A → αX.β, a] such that [A → α.Xβ, a] is in I.
Lecture #23
CONSTRUCTION OF CANONICAL LR(1) COLLECTION
Algorithm:
C is { closure({S’→.S,$}) }
repeat the followings until no more set of LR(1) items can be added to C.
for each I in C and each grammar symbol X
if goto(I,X) is not empty and not in C
add goto(I,X) to C
Example:
(The canonical LR(1) collection for the grammar S → L=R | R; L → *R | id; R → L is omitted here.)
Parsing Table
Example: the canonical LR(1) parsing table for that grammar is:
state    id     *      =      $         S     L     R
  0      s5     s4                      1     2     3
  1                           acc
  2                    s6     r5
  3                           r2
  4      s5     s4                            8     7
  5                    r4     r4
  6      s12    s11                           10    9
  7                    r3     r3
  8                    r5     r5
  9                           r1
 10                           r5
 11      s12    s11                           10    13
 12                           r4
 13                           r3
Lecture #24
Constructing LALR Parsing Tables
• An LALR parser is obtained by shrinking the canonical LR(1) parser: states whose sets of LR(1)
items have the same cores are merged into a single state.
• The core of a set of LR(1) items is the set of its first components (i.e., the LR(0) items, with the
look-aheads dropped).
Example: the LR(1) items below reduce to the following cores:
S → L.=R, $  ->  S → L.=R
R → L., $    ->  R → L.
• We will find the states (sets of LR(1) items) in a canonical LR(1) parser with the same cores,
and merge them into a single state.
• This shrink process may introduce a reduce/reduce conflict in the resulting LALR parser;
in that case the grammar is NOT LALR.
• This shrink process cannot produce a shift/reduce conflict.
• We will do this for all states of a canonical LR(1) parser to get the states of the LALR
parser.
• In fact, the number of the states of the LALR parser for a grammar will be equal to the
number of states of the SLR parser for that grammar.
Parsing Tables
• Create the canonical LR(1) collection of the sets of LR(1) items for the given grammar.
• Find each core; find all sets having that same core; replace those sets having same cores
with a single set which is their union.
C={I0,...,In} ->C’={J1,...,Jm} where m ≤ n
• Create the parsing tables (action and goto tables) same as the construction of the
parsing tables of LR(1) parser.
– Note that: if J = I1 ∪ ... ∪ Ik, then since I1,...,Ik have the same cores, the cores of
goto(I1,X), ..., goto(Ik,X) must also be the same.
– So, goto(J,X) = K, where K is the union of all sets of items having the same core as
goto(I1,X).
Shift/Reduce Conflict
• We say that we cannot introduce a shift/reduce conflict during the shrink process for the
creation of the states of a LALR parser.
• Assume that we could introduce a shift/reduce conflict. In this case, a state of the LALR parser
must have:
A → α., a   and   B → β.aγ, b
• Since merged states have the same cores, the state of the canonical LR(1) parser that contributed
A → α., a must also contain an item B → β.aγ, c for some look-ahead c.
• But then that state of the canonical LR(1) parser already has the same shift/reduce conflict,
i.e. the conflict was not introduced by the shrink process.
(The reason: the shift operation does not depend on look-aheads.)
Reduce/Reduce Conflict
But we may introduce a reduce/reduce conflict during the shrink process for the creation of the
states of an LALR parser. For example, the two canonical LR(1) states
I1: A → α., a        I2: A → α., b
    B → β., b            B → β., c
have the same core; merging them yields
A → α., a/b
B → β., b/c
which has a reduce/reduce conflict on b.
Example:
For the above canonical LR parsing table, we can get the LALR(1) collection by merging the
states with equal cores: states 4 and 11, 5 and 12, 7 and 13, and 8 and 10 are merged.
Lecture #25
Using Ambiguous Grammars
• All grammars used in the construction of LR parsing tables must be unambiguous, but we can
still create an LR parser for the ambiguous grammar
E → E+E | E*E | (E) | id
by resolving its conflicts with precedence and associativity information. The equivalent
unambiguous grammar is:
E → E+T | T
T → T*F | F
F → (E) | id
FOLLOW(E) = { $, +, *, ) }
State I7 has shift/reduce conflicts for the symbols + and *.
when the current token is +:
– shift  -> + is right-associative
– reduce -> + is left-associative
when the current token is *:
– shift  -> * has higher precedence than +
– reduce -> + has higher precedence than *
state    id     +      *      (      )      $        E
  0      s3                   s2                     1
  1             s4     s5                   acc
  2      s3                   s2                     6
  3             r4     r4            r4     r4
  4      s3                   s2                     7
  5      s3                   s2                     8
  6             s4     s5            s9
  7             r1     s5            r1     r1
  8             r2     r2            r2     r2
  9             r3     r3            r3     r3
Module-2
Lecture #26
SYNTAX-DIRECTED TRANSLATION
• Grammar symbols are associated with attributes to associate information with the programming
language constructs that they represent.
• Values of these attributes are evaluated by the semantic rules associated with the production
rules.
• Evaluation of these semantic rules:
– may generate intermediate codes
– may put information into the symbol table
– may perform type checking
– may issue error messages
– may perform some other activities
– In fact, they may perform almost any activities.
• An attribute may hold almost anything:
a string, a number, a memory location, a complex record.
• Evaluation of a semantic rule defines the value of an attribute. But a semantic rule may also have
some side effects such as printing a value.
• Intermediate codes are machine independent codes, but they are close to machine instructions.
• The given program in a source language is converted to an equivalent program in an
intermediate language by the intermediate code generator.
• Intermediate language can be many different languages, and the designer of the compiler
decides this intermediate language.
– syntax trees can be used as an intermediate language.
– postfix notation can be used as an intermediate language.
– three-address code (quadruples) can be used as an intermediate language
• we will use quadruples to discuss intermediate code generation
• quadruples are close to machine instructions, but they are not actual machine instructions.
Syntax Tree
A syntax tree is a variant of the parse tree, where each leaf represents an operand and each interior
node represents an operator.
Postfix Notation
Postfix Notation is another useful form of intermediate code if the language is mostly expressions.
Example: the postfix notation for the expression a + b * c is a b c * +.
• We use the term “three-address code” because each statement usually contains three addresses
(two for operands, one for the result).
• The most general kind of three-address code is:
x := y op z
where x, y and z are names, constants or compiler-generated temporaries; op is any operator.
• But we may also use the following notation for quadruples (a much better notation because it
looks like a machine code instruction):
op y,z,x
apply operator op to y and z, and store the result in x.
Representation of three-address codes
Three-address code can be represented in various forms viz. Quadruples, Triples and
Indirect Triples. These forms are demonstrated by way of an example below. Example:
A = -B * (C + D)
The three-address code is:
T1 = -B
T2 = C + D
T3 = T1 * T2
A = T3
Quadruple:
      Operator    Operand 1    Operand 2    Result
(1)   -           B                         T1
(2)   +           C            D            T2
(3)   *           T1           T2           T3
(4)   =           T3                        A
Triple:
      Operator    Operand 1    Operand 2
(0)   -           B
(1)   +           C            D
(2)   *           (0)          (1)
(3)   =           A            (2)
Indirect Triple:
      Statement
(0)   (56)
(1)   (57)
(2)   (58)
(3)   (59)
Here the pointers (56)-(59) refer to the four triples above, which are stored in a separate table;
reordering the statement list does not disturb the triples themselves.
Lecture #27
TRANSLATION OF ASSIGNMENT STATEMENTS
E → E1 + E2 E.place = newtemp();
E.code = E1.code || E2.code || gen( E.place = E1.place + E2.place )
E → E1 * E2 E.place = newtemp();
E.code = E1.code || E2.code || gen( E.place = E1.place * E2.place )
E → - E1 E.place = newtemp();
E.code = E1.code || gen( E.place = - E1.place )
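The scheme can be animated with a small Python sketch: E.place is the value returned by gen_expr, and E.code is accumulated in a shared list (the tuple-based tree input is an assumption for illustration). On the expression A = -B * (C + D) used earlier it reproduces the T1..T3 sequence:

```python
temps = 0
def newtemp():
    global temps
    temps += 1
    return f"t{temps}"

def gen_expr(e, code):
    """e is an identifier string or a tuple ('+'|'*'|'neg', operands...)."""
    if isinstance(e, str):
        return e                         # E.place of an id is itself
    if e[0] == "neg":
        p = gen_expr(e[1], code)
        t = newtemp(); code.append(f"{t} = - {p}")
        return t
    l = gen_expr(e[1], code)
    r = gen_expr(e[2], code)
    t = newtemp(); code.append(f"{t} = {l} {e[0]} {r}")
    return t

code = []
place = gen_expr(("*", ("neg", "B"), ("+", "C", "D")), code)
code.append(f"A = {place}")
print("\n".join(code))
# t1 = - B
# t2 = C + D
# t3 = t1 * t2
# A = t3
```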
Example:
The translation for A or B and C is the three-address sequence:
T1 := B and C T2 := A or T1
Also, the translation of a relational expression such as A < B is the three-address sequence:
100: if A < B goto 103
101: T := 0
102: goto 104
103: T := 1
104: ...
o Quadruples are being generated and NEXTQUAD indicates the next available entry in the
quadruple array.
Example:
Here (4) is a true exit and (6) is a false exit of the Boolean expressions.
Lecture #28
Generating 3-address code for Numerical Representation of Boolean expressions
E -> E or M E
E -> E and M E
E -> not E
E -> ( E )
E -> id
E -> id relop id
M -> ε
Example:
For the expression P<Q or R<S and T, the parsing steps and corresponding semantic actions are
shown below. We assume that NEXTQUAD has an initial value of 100.
Step 1: P<Q gets reduced to E by E -> id relop id. The grammatical form is E1 or R<S and T.
The code generated (Makelist) is:
100: if P<Q goto _
101: goto _
E1 is true if the goto of 100 is reached and false if the goto of 101 is reached.
Step 2: R<S gets reduced to E by E -> id relop id. The grammatical form is E1 or E2 and T.
The code generated (Makelist) is:
102: if R<S goto _
103: goto _
E2 is true if the goto of 102 is reached and false if the goto of 103 is reached.
Step 3: T gets reduced to E by E -> id. The grammatical form is E1 or E2 and E3.
We have the following code generated (Makelist):
104: if T goto _
105: goto _
Step 4: E2 and E3 gets reduced to E by E->E and E. The grammatical form is E1 or E4.
We have no new code generated but changes are made in the already generated code (Backpatch).
100: if P<Q goto _
101: goto _
102: if R<S goto 104
103: goto _
104: if T goto _
105: goto _
E4 is true only if E3.TRUE (goto of 104) is reached. E4 is false if E2.FALSE (goto of 103) or
E3.FALSE (goto of 105) is reached (Merge).
Step 5: E1 or E4 gets reduced to E by E -> E or M E. We have no new code generated, but changes
are made in the already generated code (Backpatch):
100: if P<Q goto _
101: goto 102
102: if R<S goto 104
103: goto _
104: if T goto _
105: goto _
E is true only if E1.TRUE (goto of 100) or E4.TRUE (goto of 104) is reached (Merge). E is false if
E4.FALSE (goto of 103 or 105) is reached.
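The MAKELIST / MERGE / BACKPATCH operations used in these steps can be sketched in Python as follows (quadruples held in a dict keyed by quad number, '_' standing for a not-yet-filled target; all names are illustrative):

```python
quads = {}
NEXTQUAD = 100

def emit(text):
    global NEXTQUAD
    quads[NEXTQUAD] = text
    NEXTQUAD += 1

def makelist(i):
    return [i]                     # a fresh list holding quad number i

def merge(p1, p2):
    return p1 + p2                 # concatenation of the two lists

def backpatch(plist, target):
    for i in plist:                # fill every pending jump on the list
        quads[i] = quads[i].replace("goto _", f"goto {target}")

# E -> id1 relop id2 (e.g. P < Q):
emit("if P<Q goto _")              # 100: E.TRUE  = makelist(100)
emit("goto _")                     # 101: E.FALSE = makelist(101)
true_list, false_list = makelist(100), makelist(101)

backpatch(true_list, 104)          # later, once the target is known
for n in sorted(quads):
    print(n, quads[n])
# 100 if P<Q goto 104
# 101 goto _
```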
o Boolean expressions may in practice contain arithmetic sub-expressions, e.g., (A+B)>C.
o We can accommodate such sub-expressions by adding the production E -> E op E to our
grammar.
o We will also add a new field MODE for E. If E has been achieved after reduction using
the above (arithmetic) production, we make E.MODE = arith, otherwise make E.MODE =
bool.
o If E.MODE = arith, we treat it arithmetically and use E.PLACE. If E.MODE = bool, we treat
it as Boolean and use E.FALSE and E.TRUE.
Lecture #29
Statements that Alter Flow of Control
-> In order to implement goto statements, we need to define a LABEL for a statement. A
production can be added for this purpose:
S -> LABEL : S
LABEL -> id
-> The semantic action attached with this production is to record the LABEL and its value
(NEXTQUAD) in the symbol table. It will also backpatch any previous references to this LABEL with
its current value.
-> The following grammar can be used to incorporate structured flow-of-control constructs:
(1) S -> if E then S
(2) S -> if E then S else S
(3) S -> while E do S
(4) S -> begin L end
(5) S -> A
(6) L -> L ; S
(7) L -> S
->We introduce a new field NEXT for S and L like TRUE and FALSE for E. S.NEXT and L.NEXT are
respectively the pointers to a list of all conditional and unconditional jumps to the quadruple
following statement S and statement-list L in execution order.
->We also introduce the marker non-terminal M as in the case of grammar for Boolean expressions.
This is put before statement in if-then, before both statements in if-then-else
and the statement in while-do as we may need to proceed to them after evaluating E. In
case of while-do, we also need to put M before E as we may need to come back to it after executing
S.
->In case of if-then-else, if we evaluate E to be true, first S will be executed. After this we should
ensure that instead of second S, the code after this if-then-else statement be executed. We thus
place another non-terminal marker N after first S i.e. before else.
The grammar now is as follows:
(1) S -> if E then M S
(2) S -> if E then M S N else M S
(3) S -> while M E do M S
(4) S -> begin L end
(5) S -> A
(6) L -> L ; M S
(7) L -> S
(8) M -> ε
(9) N -> ε
Lecture #30
POSTFIX TRANSLATIONS
In a production A -> α, the translation rule for A.CODE consists of the concatenation of the CODE
translations of the non-terminals in α, in the same order as the non-terminals appear in α.
Productions can be factored to achieve postfix form.
The production
S->for L = E1 step E2 to E3 do S1
Here L is any expression with l-value, usually a variable, called the index. E1, E2 and E3 are
expressions called the initial value, increment and limit, respectively. Semantically, the for statement
is equivalent to the following program.
begin
  INDEX = addr ( L );
  *INDEX = E1; INCR = E2; LIMIT = E3;
  while *INDEX <= LIMIT do begin
    code for statement S1;
    *INDEX = *INDEX + INCR
  end
end
(1) F->for L
(2) T->F = E1 by E2 to E3 do
(3) S->T S1
Lecture #31
ARRAY REFERENCES IN ARITHMETIC EXPRESSIONS
Elements of arrays can be accessed quickly if the elements are stored in a block of consecutive
locations.
For a one-dimensional array A, with base address baseA, lower bound low and element size width,
the address of A[i] is:
i*width + (baseA - low*width)
So, the location of A[i] can be computed at the run-time by evaluating the formula i*width+c where c
is (baseA-low*width) which is evaluated at compile-time.
Intermediate code generator should produce the code to evaluate this formula i*width+c
(one multiplication and one addition operation).
A two-dimensional array can be stored in either row-major (row-by-row) or column-major
(column-by-column).
((i1*n2)+i2)*width + (baseA - ((low1*n2)+low2)*width)
where n2 is the number of elements along the second dimension (row-major order).
To evaluate the (( ... ((i1*n2)+i2) ... )*nk+ik portion of this formula for a k-dimensional array, we can
use the recurrence:
e1 = i1
em = e(m-1) * nm + im
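A quick numeric check of the two-dimensional formula, with assumed values (a 10 x 20 array, element width 4, low1 = low2 = 1, base address 5000):

```python
base, width = 5000, 4
low1, low2 = 1, 1
n1, n2 = 10, 20      # number of elements along each dimension

def addr(i1, i2):
    # ((i1*n2)+i2)*width + (base - ((low1*n2)+low2)*width)
    c = base - ((low1 * n2) + low2) * width   # computed at compile time
    return ((i1 * n2) + i2) * width + c       # computed at run time

print(addr(1, 1))   # 5000: the first element sits at the base address
print(addr(2, 3))   # 5000 + ((2-1)*20 + (3-1)) * 4 = 5088
```

Only the run-time part costs instructions per access (one multiplication and one addition), exactly as the notes state.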
The grammar and suitable translation scheme for arithmetic expressions with array references is as
given below:
E → E1 + E2    E.PLACE = NEWTEMP ( );
               GEN (E.PLACE = E1.PLACE + E2.PLACE)
E → ( E1 )     E.PLACE = E1.PLACE
E → L          if (L.OFFSET = NULL) then E.PLACE = L.PLACE
               else { E.PLACE = NEWTEMP ( );
                      GEN (E.PLACE = L.PLACE[L.OFFSET]) }
Here, NDIM denotes the number of dimensions, LIMIT(ARRAY, i) returns the upper limit
along the i-th dimension of ARRAY (i.e. ni), and WIDTH(ARRAY) returns the number of bytes for one
element of ARRAY.
2. Declarations
Following is the grammar and a suitable translation scheme for declaration statements:
Here, ENTER makes the entry into symbol table while ATTR is used to trace the data type.
3. Procedure Calls
Following is the grammar and a suitable translation scheme for Procedure Calls:
4. Case Statements
Consider the switch statement:
switch E
begin
  case V1: S1
  case V2: S2
  .
  .
  case Vn-1: Sn-1
  default: Sn
end
The translation scheme for this is shown below:
      code to evaluate E into T
      goto TEST
L1:   code for S1
      goto NEXT
L2:   code for S2
      goto NEXT
      .
      .
Ln-1: code for Sn-1
      goto NEXT
Ln:   code for Sn
      goto NEXT
TEST: if T = V1 goto L1
      if T = V2 goto L2
      .
      .
      if T = Vn-1 goto Ln-1
      goto Ln
NEXT:
Lecture #32
SYMBOL TABLES
• Symbol table is a data structure meant to collect information about names appearing in the
source program.
• It keeps track about the scope/binding information about names.
• Each entry in the symbol table has a pair of the form (name and information).
• Information consists of attributes (e.g. type, location) depending on the language.
• Whenever a name is encountered, it is checked in the symbol table to see if it already occurs.
If not, a new entry is created.
• In some cases, the symbol table record is created by the lexical analyzer as soon as the name
is encountered in the input, and the attributes of the name are entered when the
declarations are processed.
• If the same name can be used to denote different program elements in the same block, the
symbol table record is created only when the name’s syntactic role is discovered.
Linear List
• It is the simplest approach in symbol table organization.
• The new names are added to the table in the order they arrive.
• A name is searched for its existence linearly.
• The average number of comparisons required is proportional to 0.5*(n+1), where n = number
of entries in the table.
• It takes less space but more access time.
Search Tree
• It is more efficient than the linear list.
• We provide two links- left and right, which point to record in the search tree.
• A new name is added at a proper location in the tree such that it can be accessed
alphabetically.
• For any node name1 in the tree, all names accessible by following the left link precede name1
alphabetically.
• Similarly, for any node name1 in the tree, all names accessible by following the right link
succeed name1 alphabetically.
• The time for adding/searching a name is proportional to (m+n) log2 n.
Hash Table
• A hash table is a table of k pointers, numbered 0 to k-1, each pointing to a list of records
within the symbol table.
• To search a value, we find out the hash value of the name by applying suitable hash function.
• The hash function maps the name into an integer value between 0 and k-1 and uses it as an
index in the hash table to search the list of the table records that are built on that hash index.
• To add a non-existent name, we create a record for that name and insert it at the head of the
list.
• Each name possesses a region of validity within the source program called the scope of that
name.
• The rules governing the scope of names in a block-structured language are as follows:
• A name declared within block B is valid only within B.
• If block B1 is nested within B2, then any name that is valid for B2 is also valid for
B1, unless identifier for that name is re-declared in B1.
• These rules require a more complicated symbol table organization than simply a list of
associations between names and attributes.
• One technique is to keep multiple symbol tables for each active block:
• Each table is list of names and their associated attributes, and the tables are organized on stack.
• Whenever a new block is entered, a new table is pushed on the stack.
• When a declaration is compiled, the table on the stack is searched for the name.
• If name is not found it is inserted.
• When a reference is translated, it is searched in all tables starting from the top (see the sketch
at the end of this lecture).
• Another technique is to represent scope information in the symbol table.
• Store the nesting depth of each procedure block in the symbol table.
• Use the (procedure name, nesting depth) pair as the key to accessing the
information from the table.
• The nesting depth of a procedure is a number obtained by starting with a value of one for
the main program and adding one every time we go from an enclosing to an enclosed procedure. It
counts the number of procedures in the referencing environment of a procedure.
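The stack-of-tables technique described above can be sketched as follows (class and method names are illustrative):

```python
class ScopedSymbolTable:
    def __init__(self):
        self.tables = [{}]                   # global scope at the bottom

    def enter_block(self):
        self.tables.append({})               # push a table for the block

    def leave_block(self):
        self.tables.pop()                    # discard the block's names

    def declare(self, name, attrs):
        top = self.tables[-1]
        if name in top:
            raise NameError(f"{name} redeclared in this block")
        top[name] = attrs

    def lookup(self, name):
        for table in reversed(self.tables):  # search from the top down
            if name in table:
                return table[name]
        raise NameError(f"{name} not declared")

st = ScopedSymbolTable()
st.declare("x", {"type": "int"})
st.enter_block()
st.declare("x", {"type": "float"})   # re-declaration in the inner block
print(st.lookup("x"))                # {'type': 'float'}
st.leave_block()
print(st.lookup("x"))                # {'type': 'int'}
```

This mirrors the scope rules above: a name declared in block B shadows the same name from an enclosing block, and becomes invisible again when B is left.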
Lecture#33
Intermediate Code Generation
A source code can directly be translated into its target machine code, then why at all we need to
translate the source code into an intermediate code which is then translated to its target code? Let us
see the reasons why we need an intermediate code.
If a compiler translates the source language to its target machine language without having the
option for generating intermediate code, then for each new machine, a full native compiler is
required.
Intermediate code eliminates the need of a new full compiler for every unique machine by
keeping the analysis portion same for all the compilers.
The second part of compiler, synthesis, is changed according to the target machine.
It becomes easier to apply the source code modifications to improve code performance by
applying code optimization techniques on the intermediate code.
Intermediate Representation
Intermediate codes can be represented in a variety of ways and they have their own benefits.
High Level IR - High-level intermediate code representation is very close to the source language
itself. They can be easily generated from the source code and we can easily apply code
modifications to enhance performance. But for target machine optimization, it is less preferred.
Low Level IR - This one is close to the target machine, which makes it suitable for register and
memory allocation, instruction set selection, etc. It is good for machine-dependent optimizations.
Intermediate code can be either language specific (e.g., Byte Code for Java) or language independent
(three-address code).
Three-Address Code
The intermediate code generator receives input from its predecessor phase, the semantic analyzer, in
the form of an annotated syntax tree. That syntax tree can then be converted into a linear
representation, e.g., postfix notation. Intermediate code tends to be machine-independent code.
Therefore, the code generator assumes an unlimited number of memory locations (registers) when
generating code.
For example:
a = b + c * d;
The intermediate code generator will try to divide this expression into sub-expressions and then
generate the corresponding code.
r1 = c * d;
r2 = b + r1;
a = r2
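A minimal sketch of how such code can be generated by recursing over the annotated tree (the expr
record and the output format here are illustrative assumptions, not a fixed interface):

#include <stdio.h>
#include <string.h>

struct expr {                    /* a node of the (annotated) syntax tree */
    char op;                     /* '+', '*', ...; 0 marks a leaf         */
    char name[8];                /* identifier name, used for leaves      */
    struct expr *left, *right;
};

static int ntemp = 0;            /* unlimited supply of temporaries r1, r2, ... */

/* emit three-address code for e; write the place holding its value to `place` */
void gen(struct expr *e, char place[8]) {
    if (e->op == 0) {            /* leaf: the value is the name itself */
        strcpy(place, e->name);
        return;
    }
    char l[8], r[8];
    gen(e->left, l);             /* generate code for sub-expressions first */
    gen(e->right, r);
    sprintf(place, "r%d", ++ntemp);
    printf("%s = %s %c %s;\n", place, l, e->op, r);
}

Applied to the tree for b + c * d, this prints r1 = c * d; and then r2 = b + r1;, matching the
hand-written code above.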
A three-address code has at most three address locations to calculate the expression. It can be
represented in two forms: quadruples and triples.
Quadruples
Each instruction in quadruple representation is divided into four fields: op, arg1, arg2, and result.
The above example is represented below in quadruple format:
Op    arg1    arg2    result
*     c       d       r1
+     b       r1      r2
=     r2              a
Triples
Each instruction in triple representation has three fields: op, arg1, and arg2. The result of a
sub-expression is denoted by the position of the instruction that computes it. Triples are similar to
DAGs and syntax trees; they are equivalent to DAGs while representing expressions.
      Op    arg1    arg2
(0)   *     c       d
(1)   +     b       (0)
(2)   =     a       (1)
Triples face the problem of code immovability during optimization: the results are positional, and
changing the order or position of an expression may cause problems.
Indirect Triples
This representation is an enhancement over triples representation. It uses pointers instead of position to
store results. This enables the optimizers to freely re-position the sub-expression to produce an
optimized code.
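The contrast between the three representations can be sketched as record layouts in C (the field
names are illustrative assumptions):

/* quadruples: the result field names an explicit temporary */
struct quad {
    const char *op, *arg1, *arg2, *result;
};

/* triples: an argument is either a name or the position (index)
   of the instruction whose result it uses */
struct targ {
    int is_pos;                        /* 1: positional reference          */
    union { const char *name; int pos; } u;
};
struct triple {
    const char *op;
    struct targ arg1, arg2;            /* the result is this triple's index */
};

/* indirect triples: an instruction list of pointers into the triple table,
   so the optimizer can reorder instructions freely without disturbing the
   positional references inside the triples themselves */
struct triple  table[100];             /* the triples                       */
struct triple *instr[100];             /* execution order: a permutation    */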
Declarations
A variable or procedure has to be declared before it can be used. Declaration involves allocation of
space in memory and entry of type and name in the symbol table. A program may be coded and
designed keeping the target machine structure in mind, but it may not always be possible to accurately
convert a source code to its target language.
Taking the whole program as a collection of procedures and sub-procedures, it becomes possible to
declare all the names local to the procedure. Memory allocation is done in a consecutive manner, and
names are allocated memory in the sequence they are declared in the program. We use an offset
variable, initialized to zero {offset = 0}, to denote the base address.
The source programming language and the target machine architecture may vary in the way names are
stored, so relative addressing is used. While the first name is allocated memory starting from memory
location 0 {offset = 0}, each name declared later is allocated memory next to the previous one.
Example:
We take the example of the C programming language, where an integer variable is assigned 2 bytes of
memory and a float variable is assigned 4 bytes of memory.
int a;
float b;
Allocation process:
{offset = 0}
int a;
id.type = int
id.width = 2
{offset = 2}
float b;
id.type = float
id.width = 4
{offset = 6}
To enter this detail into the symbol table, a procedure enter can be used. This procedure should
create an entry in the symbol table for the variable name, with its type set to type and its relative
address offset in its data area.
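The original structure of enter is not reproduced here; a minimal sketch, assuming the entry record
of the earlier hash-table sketch is extended with an integer offset field, might be:

int offset = 0;                       /* running relative address in the data area */

/* hypothetical sketch: record name, type, and the current offset in the
   symbol table, then advance the offset by the type's width             */
void enter(const char *name, const char *type, int width) {
    struct entry *e = insert(name, type);  /* insert() from the earlier sketch  */
    e->offset = offset;                    /* relative address (assumed field)  */
    offset += width;
}

/* matching the allocation process above:
   enter("a", "int", 2);      offset: 0 -> 2
   enter("b", "float", 4);    offset: 2 -> 6 */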
Lecture #34
Directed Acyclic Graph
A Directed Acyclic Graph (DAG) is a tool that depicts the structure of basic blocks, helps to see the
flow of values among the basic blocks, and offers optimization too. A DAG provides easy
transformation of basic blocks. A DAG can be understood here:
Leaf nodes represent identifiers, names, or constants.
Interior nodes represent operators.
Interior nodes also represent the results of expressions or the identifiers/names where the values
are to be stored or assigned.
Example:
t0 = a + b
t1 = t0 + c
d = t0 + t1
(In the DAG for this block, the node for t0 = a + b is shared: it is an operand of both
t1 = t0 + c and d = t0 + t1.)
BOOLEAN EXPRESSIONS are constructed using boolean operators. We will consider here the following
rules:
E → E or E
E → E and E
E → not E
E → ( E )
Boolean expressions are used as conditions for statements changing the flow of control.
Evaluation of boolean expressions can be optimized if it is sufficient to evaluate a part of the
expression that determines its value.
When translating Boolean expressions into three-address code, we can use two different
methods.
Numerical method.
Assign numerical values to true and false and evaluate the expression analogously to an
arithmetic expression. This is convenient for boolean expressions which are not involved in flow
of control constructs.
Jump method.
Evaluate a boolean expression E as a sequence of conditional and unconditional jumps to
location E.true (if E is true) or to E.false. To be detailed shortly.
S → if E then S1
S → if E then S1 else S2
S → while E do S1
Since several attributes are inherited and since each action above appears after its associated
production, this is not a translation scheme.
However it is an L-attributed definition.
Then its conversion into a translation scheme is obvious.
From now on, we may present a translation scheme as a syntax-directed definition if the latter is
an L-attributed definition.
The reason is to make large translation schemes easier to read.
TRANSLATION OF BOOLEAN EXPRESSIONS WITH THE JUMP METHOD. We will consider here the
following rules:
E → E1 or E2
E → E1 and E2
E → not E1
E → ( E1 )
We assume that the attributes E.true and E.false exist for the entire expression
as labels Ltrue and Lfalse respectively.
Observations.
Of course, this is not optimal and looks funny since the expression to translate is a sentence of
the target language!
But this jump method allows us to translate more involved expressions (which are not part of the
target language) like those of the following examples.
Example 6 Now consider the expression
a < b or c < d
Again assume that the labels Ltrue and Lfalse have been set for the entire expression. Then the
translation is
        if a < b goto Ltrue
        goto L1
L1:     if c < d goto Ltrue
        goto Lfalse
Of course the generated code is not optimal! Indeed the second statement can be eliminated, since
L1 immediately follows it.
Lecture #36
Backpatching
A key problem when generating code for boolean expressions and flow-of-control statements is that of
matching a jump instruction with the target of the jump. For example, the translation of the boolean
expression B in if ( B ) S contains a jump, for when B is false, to the instruction following the code for S. In
a one-pass translation, B must be translated before S is examined. Labels can be passed as inherited
attributes to where the relevant jump instructions were generated. But a separate pass is then needed to
bind labels to addresses. In backpatching, lists of jumps are passed as synthesized attributes. Specifically,
when a jump is generated, the target of the jump is temporarily left unspecified. Each such jump is put on
a list of jumps whose labels are to be filled in when the proper label can be determined. All of the jumps
on a list have the same target label.
Backpatching can be used to generate code for boolean expressions and flow-of-control statements in
one pass. Synthesized attributes truelist and falselist of nonterminal B are used to manage labels in
jumping code for boolean expressions. B.truelist will be a list of jump or conditional jump instructions into
which we must insert the label to which control goes if B is true. B.falselist likewise is the list of
instructions that eventually get the label to which control goes when B is false. As code is generated for
B, jumps to the true and false exits are left incomplete, with the label field unfilled. These incomplete
jumps are placed on lists pointed to by B.truelist and B.falselist, as appropriate. A statement S has a
synthesized attribute S.nextlist, denoting a list of jumps to the instruction immediately following the code
for S.
Instructions are generated into an instruction array, and labels will be indices into this array.
To manipulate lists of jumps, we use three functions: makelist(i), merge(p1, p2), and backpatch(p, i);
a sketch in C follows this list.
• makelist(i) creates a new list containing only i, an index into the array of instructions; makelist returns a
pointer to the newly created list.
• merge(p1,p2) concatenates the lists pointed to by p1 and p2, and returns a pointer to the concatenated
list.
• backpatch(p,i) inserts i as the target label for each of the instructions on the list pointed to by p.
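A minimal sketch of these three functions in C, under the assumption that each instruction is a
record with a text part and an integer target field that stays -1 until it is backpatched:

#include <stdlib.h>

struct jump_list {               /* list of indices of incomplete jumps */
    int i;                       /* index into the instruction array    */
    struct jump_list *next;
};

struct instr { char text[32]; int target; };   /* target == -1: unfilled */
struct instr code[1000];
int nextinstr = 0;               /* index of the next instruction to emit */

/* makelist(i): a new list containing only instruction index i */
struct jump_list *makelist(int i) {
    struct jump_list *p = malloc(sizeof *p);
    p->i = i;
    p->next = NULL;
    return p;
}

/* merge(p1, p2): concatenate the two lists, return the result */
struct jump_list *merge(struct jump_list *p1, struct jump_list *p2) {
    struct jump_list *p = p1;
    if (p1 == NULL) return p2;
    while (p->next != NULL)
        p = p->next;
    p->next = p2;
    return p1;
}

/* backpatch(p, i): insert i as the target label of every jump on p */
void backpatch(struct jump_list *p, int i) {
    for (; p != NULL; p = p->next)
        code[p->i].target = i;
}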
Backpatching for Boolean Expressions
Intermediate Code for Procedures
Assume that parameters are passed by value. Suppose that a is an array of integers, and that f is a
function from integers to integers. Then, the assignment n = f(a[i]); might translate into the following
three-address code:
1. t1 = i * 4
2. t2 = a[t1]
3. param t2
4. t3 = call f, 1
5. n = t3
Lecture #37
RUN TIME ADMINISTRATION
Procedure Activations
Activation Tree
• We can use a tree (called activation tree) to show the way control enters and leaves
activations.
• In an activation tree:
– Each node represents an activation of a procedure.
– The root represents the activation of the main program.
– The node a is a parent of the node b iff the control flows from a to b.
– The node a is to the left of the node b iff the lifetime of a occurs before the lifetime of b.
Control Stack
• The flow of the control in a program corresponds to a depth-first traversal of the activation
tree that:
– starts at the root,
– visits a node before its children, and
– recursively visits children at each node in a left-to-right order.
• A stack (called control stack) can be used to keep track of live procedure activations.
– An activation record is pushed onto the control stack as the activation starts.
– That activation record is popped when that activation ends.
• When node n is at the top of the control stack, the stack contains the nodes along the path
from n to the root.
Variable Scopes
• The same variable name can be used in the different parts of the program.
• The scope rules of the language determine which declaration of a name applies when the
name appears in the program.
• An occurrence of a variable (a name) is:
– local: If that occurrence is in the same procedure in which that name is declared.
– non-local: Otherwise (i.e., it is declared outside of that procedure).
Example:
procedure p;
var b: real;
   procedure p;
   var a: integer;
   begin a := 1; b := 2; end;   { a is local; b is non-local }
begin ... end;
Lecture #38
Storage Organization
Activation Records
Example:
program main;
    procedure p;
        function q(a: integer): integer;
        begin
            if (a = 1) then q := 1 else q := a + q(a - 1);
        end;
    begin q(3); end;
begin p; end.
After the calls q(3), q(2), q(1), the control stack holds (bottom to top):
    main
    p
    q(3)   a: 3
    q(2)   a: 2
    q(1)   a: 1
Creation of Activation Records
• Who deallocates?
– Callee de-allocates the part allocated by Callee.
– Caller de-allocates the part allocated by Caller.
Lecture #39 & #40
ERROR DETECTION AND RECOVERY
Error Recovery in Predictive Parsing
• Each empty entry in the parsing table is filled with a pointer to a special error routine which will
take care of that error case.
• These error routines may:
– change, insert, or delete input symbols.
– issue appropriate error messages
– pop items from the stack.
• We should be careful when we design these error routines, because we may put the parser into
an infinite loop.
Error Recovery in Operator-Precedence Parsing
Error Cases:
– No relation holds between the terminal on the top of stack and the next input symbol.
– A handle is found (reduction step), but there is no production with this handle as a right
side
Error Recovery:
– Each empty entry is filled with a pointer to an error routine.
– The error routine decides which right-hand side the popped handle “looks like”, and tries to
recover from that situation.
Error Recovery in LR Parsing
• An LR parser will detect an error when it consults the parsing action table and finds an
error entry. All empty entries in the action table are error entries.
• Errors are never detected by consulting the goto table.
• An LR parser will announce an error as soon as there is no valid continuation for the scanned
portion of the input.
• A canonical LR parser (LR(1) parser) will never make even a single reduction before
announcing an error.
• The SLR and LALR parsers may make several reductions before announcing an error.
• But, all LR parsers (LR(1), LALR and SLR parsers) will never shift an erroneous input
symbol onto the stack.
Panic-Mode Error Recovery in LR Parsing
• Scan down the stack until a state s with a goto on a particular nonterminal A is found.
(Discard everything on the stack above this state s.)
• Discard zero or more input symbols until a symbol a is found that can legitimately follow
A.
– The symbol a is simply in FOLLOW (A), but this may not work for all situations.
• The parser stacks the nonterminal A and the state goto[s,A], and it resumes the normal
parsing.
• This nonterminal A is normally a major programming construct (there can be more than one
choice for A):
– stmt, expr, block, ...
Phrase-Level Error Recovery in LR Parsing
• Each empty entry in the action table is marked with a specific error routine.
• An error routine reflects the error that the user most likely will make in that case.
• An error routine inserts the symbols into the stack or the input (or it deletes the symbols
from the stack and the input, or it can do both insertion and deletion).
– missing operand
– unbalanced right parenthesis
Lecture #41
CODE OPTIMIZATION
Optimizing Transformations
Common Sub-expression Elimination
Example:
a = b * c;
.
.
.
d = b * c + x –y;
We can eliminate the second evaluation of b*c from this code if none of the intervening statements
has changed its value. The code can be rewritten as given below.
T1 = b * c;
a = T1;
.
.
.
d = T1 + x – y;
Compile-Time Evaluation
• We can improve the execution efficiency of a program by shifting execution-time actions to
compile time.
• We can replace an expression by a single value (known as constant folding).
Example:
A = 2 * (22.0/7.0) * r
Here we can perform the computation 2 * (22.0/7.0) at compile time itself.
Example:
x = 12.4
.
.
y = x / 2.3
Here the value of x is known to be 12.4 at compile time, so x / 2.3 can be folded to the constant
12.4 / 2.3.
Variable Propagation
Example:
c = a * b; x = a;
.
.
.
d = x * b;
Here x is merely a copy of a, so the compiler can propagate a in place of x; d = x * b then becomes
d = a * b, exposing a common sub-expression.
• If the value contained in a variable at that point is not used anywhere in the program subsequently,
the variable is said to be dead at that place.
• If an assignment is made to a dead variable, then that assignment is a dead assignment and it can
be safely removed from the program.
• A piece of code is said to be dead if it computes values that are never used anywhere in
the program.
• Dead Code can be eliminated safely.
• Variable propagation often turns an assignment statement into dead code.
Example:
c = a * b;
x = a;
.
.
.
d = x * b + 4;
Variable propagation will lead to the following changes:
c = a * b;
x = a;
.
.
.
d = a * b + 4;
The assignment x = a is now a dead assignment (x is no longer used) and can be safely eliminated.
Code Motion
• We aim to improve the execution time of the program by reducing the evaluation frequency of
expressions.
• The evaluation of an expression is moved from one part of the program to another in such a way
that it is evaluated less frequently.
• Loops are usually executed several times.
• We can bring the loop-invariant statements out of the loop.
Example:
a = 200;
while (a > 0)
{
b = x + y;
if ( a%b == 0)
printf (“%d”, a);
}
The statement b = x + y is executed on every iteration of the loop. But because it is loop-invariant,
we can bring it outside the loop. It will then be executed only once.
a = 200;
b = x + y;
while (a > 0)
{
if ( a%b == 0)
printf (“%d”, a);
}
Example:
i = 1;
while (i < 10)
{
.
.
.
y = i * 4;
.
.
.
}
The multiplication i * 4 can be replaced by an induction variable t that is incremented by 4 on each
iteration, with the loop test rewritten accordingly (strength reduction):
t = 4;
while (t < 40)
{
.
.
.
y = t;
.
.
.
t = t + 4;
.
.
.
}
• Certain computations that look different to the compiler and are not identified as common sub-
expressions may actually be the same.
• An expression B op C will usually be treated as being different from C op B.
• However, for certain operations (like addition and multiplication), they produce the same result.
• We can achieve further optimization by treating them as common sub-expressions for such
operations.
Lecture #42
Local Optimizations
• Target code generated statement by statement generally contains redundant instructions.
• We can improve the quality of such code by applying optimizing transformations locally by
examining a short sequence of code instructions and replacing them by faster or shorter sequence,
if possible.
• This technique is known as Peephole Optimization where the peephole is a small moving
window on the program.
• Many of the code optimization techniques can be carried out within a single portion of a
program known as a Basic Block.
Basic Block
• A Basic Block is defined as a sequence of consecutive statements with only one entry (at the
beginning) and one exit (at the end).
• When a Basic Block of a program is entered, all the statements are executed in sequence without
a halt or possibility of branch except at the end.
• In order to determine all the Basic Block in a program, we need to identify the leaders,
the first statement of each Basic Block.
• Any statement that satisfies the following conditions is a leader:
o The first statement of the program is a leader.
o Any statement which is the target of a goto (jump) is a leader.
o Any statement that immediately follows a goto (jump) is a leader.
• A basic block is defined as the portion of code from one leader up to, but not including, the
next leader or the end of the program.
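A minimal sketch of leader identification in C, assuming the intermediate code is an array of n
statements and that is_jump and jump_target are accessors supplied by the real IR:

/* mark the leaders among n intermediate-code statements */
int is_jump(int j);          /* assumed: statement j is a goto/branch     */
int jump_target(int j);      /* assumed: index of the statement jumped to */

void mark_leaders(int n, int leader[]) {
    for (int j = 0; j < n; j++)
        leader[j] = 0;
    leader[0] = 1;                           /* the first statement        */
    for (int j = 0; j < n; j++) {
        if (is_jump(j)) {
            leader[jump_target(j)] = 1;      /* target of a jump           */
            if (j + 1 < n)
                leader[j + 1] = 1;           /* statement following a jump */
        }
    }
    /* each basic block then extends from a leader up to, but not
       including, the next leader or the end of the program */
}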
Flow Graph
• It is a directed graph that is used to portray basic blocks and their successor relationships.
• The nodes of a flow graph are the basic blocks.
• The basic block whose leader is the first statement is known as the initial block.
• There is a directed edge from block B1 to B2 if B2 could immediately follow B1 during
execution.
• To determine whether there should be a directed edge from B1 to B2, the following criteria are
applied:
o There is a jump from the last statement of B1 to the first statement of B2, OR
o B2 immediately follows B1 in the order of the program and B1 does not end in an
unconditional jump.
• B1 is known as the predecessor of B2 and B2 is a successor of B1.
Loops
• We need to identify all the loops in a flow graph to carry out many optimizations discussed
earlier.
• A loop is a collection of nodes that
o is strongly connected i.e. from any node in the loop to any other, there is a
path of length one or more wholly within the loop, and
o has a unique entry, a node in the loop such that the only way to reach a
node in the loop from a node outside the loop is to first go through the entry.
DAG Representation of a Basic Block
• Many optimizing transformations can be implemented using the DAG representation of a basic
block.
• DAG stands for Directed Acyclic Graph i.e. a graph with directed edges and no cycles.
• A DAG is very much like a tree but differs in that it may contain shared nodes; shared nodes
indicate common sub-expressions.
• A DAG has following components;
o Leaves are labeled by unique identifiers, either variable names or constants.
o Interior nodes are labeled by an operator symbol.
o Nodes are optionally given an extra set of identifiers known as attached
identifiers.
DAG Construction
• We assume there are initially no nodes and NODE( ) is undefined for all arguments.
• Each 3-address statement has one of three forms:
(i) A = B op C
(ii) A = op B
(iii) A = B
• We shall do the following steps (1) through (3) for each 3-address statement of the basic block:
(1) If NODE(B) is undefined, create a leaf labeled B, and let NODE(B) be this node. In case (i), if
NODE(C) is undefined, create a leaf labeled C and let that leaf be NODE(C).
(2) In case (i), determine if there is a node labeled op whose left child is NODE(B) and whose
right child is NODE(C). (This is to catch common sub-expressions.) If not, create such a node.
In case (ii), determine whether there is a node labeled op whose lone child is NODE(B). If not,
create such a node. Let n be the node found or created in both cases. In case (iii), let n be
NODE(B).
(3) Append A to the list of attached identifiers for the node n found in (2). Delete A from the list
of attached identifiers for NODE(A). Finally, set NODE(A) to n.
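A minimal sketch in C of steps (1) and (2) for case (i), A = B op C; the NODE lookup is assumed to
be supplied elsewhere, and the attached-identifier bookkeeping of step (3) is reduced to updating
NODE(A):

#include <stdlib.h>
#include <string.h>

struct dnode {
    const char *op;              /* operator label; NULL for a leaf       */
    const char *label;           /* identifier/constant label, for leaves */
    struct dnode *left, *right;
};

struct dnode *NODE(const char *name);               /* assumed lookup */
void set_NODE(const char *name, struct dnode *n);   /* assumed update */

static struct dnode *nodes[1000];   /* all interior nodes created so far */
static int nnodes = 0;

static struct dnode *leaf(const char *name) {       /* step (1) */
    struct dnode *n = NODE(name);
    if (n == NULL) {
        n = calloc(1, sizeof *n);
        n->label = name;
        set_NODE(name, n);
    }
    return n;
}

/* step (2): reuse an existing op node with the same children, which
   catches common sub-expressions; otherwise create a new node */
static struct dnode *op_node(const char *op, struct dnode *l, struct dnode *r) {
    for (int i = 0; i < nnodes; i++)
        if (nodes[i]->op != NULL && strcmp(nodes[i]->op, op) == 0
            && nodes[i]->left == l && nodes[i]->right == r)
            return nodes[i];
    struct dnode *n = calloc(1, sizeof *n);
    n->op = op;
    n->left = l;
    n->right = r;
    return nodes[nnodes++] = n;
}

/* case (i), A = B op C: steps (1)-(3), with step (3) simplified */
void dag_assign(const char *A, const char *op, const char *B, const char *C) {
    struct dnode *n = op_node(op, leaf(B), leaf(C));
    set_NODE(A, n);
}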
Applications of DAG
• We automatically detect common sub-expressions while constructing DAG.
• We can also determine which identifiers have their values used inside the block; they are exactly
those for which a leaf is created in step (1).
• We can also determine which statements compute values that could be used outside the block;
they are exactly those statements S whose node n constructed in step (2) still has NODE(A) = n
at the end of the DAG construction, where A is the identifier assigned by statement S, i.e., A is
still an attached identifier for n.
Global Data Flow Analysis
• Certain optimizations can be achieved by examining the entire program and not just a portion of the
program.
• Use-definition chaining (ud-chaining) is one particular problem of this kind.
• Here we try to find out which definition of a variable applies at a statement that uses the
value of that variable.