1. Department of CE
Prof. Happy Chapla
Unit no: 3 — Parsing
CD: Compiler Design (01ce0601)
2. Outline :
Role of parser
Parse tree
Classification of grammar
Derivation and Reduction
Ambiguous grammar
Top-down Bottom-up parsing
LR Parsers
Syntax directed definition
Construction of syntax tree
S-Attribute, L-Attribute definition
3. Role of
Parser
In our compiler model, the parser obtains a string of
tokens from the lexical analyzer and verifies that the
string of token names can be generated by the
grammar for the source language.
It reports any syntax errors in the program. It also
recovers from commonly occurring errors so that it
can continue processing its input.
4. Scanner –
Parser
Interaction
Parse Tree
For well-formed programs, the parser constructs a
parse tree and passes it to the rest of the compiler for
further processing.
5. Syntax Error
Handling
If a compiler had to process only correct programs,
its design and implementation would be greatly
simplified.
But programmers frequently write incorrect
programs, and a good compiler should assist the
programmer in identifying and locating errors.
We know that programs can contain errors at many
different levels. For example, errors can be
Lexical: such as misspelling an identifier, keyword, or operator.
Syntactic: such as an arithmetic expression with unbalanced parentheses.
Semantic: such as an operator applied to incompatible operands.
Logical: such as an infinitely recursive call.
6. Syntax Error
Handling
The error handler in a parser has simple-to-state
goals :
It should report the presence of errors clearly and
accurately.
It should recover from each error quickly enough to be able to detect subsequent errors.
It should not significantly slow down the processing
of correct programs.
7. Parse tree
A parse tree is a graphical representation of a derivation. Its nodes are labeled with grammar symbols, terminal as well as non-terminal.
The root of the parse tree is the start symbol of the grammar.
A parse tree follows the precedence of operators: the deepest sub-tree is traversed first, so the operator in a parent node has lower precedence than the operator in its sub-tree.
9. Classification
of Grammar
Grammars are classified on the basis of production they use
(Chomsky, 1963).
Given below are class of grammar where each class has its own
characteristics and limitations.
1. Type-0 Grammar: Recursively Enumerable Grammar
These grammars are known as phrase structure grammars.
Their productions are of the form α → β, where both α and β are strings of terminal and non-terminal symbols.
This type of grammar is not relevant to the specification of programming languages.
2. Type-1 Grammar: Context Sensitive Grammar
These grammars have rules of the form αAβ → αγβ, with A a nonterminal and α, β, γ strings of terminal and nonterminal symbols. The strings α and β may be empty, but γ must be nonempty.
Eg: AB → CDB
Ab → Cdb
A → b
10. Classification
of Grammar
3. Type-2 Grammar: Context Free Grammar
These are defined by rules of the form A → γ, with A a nonterminal and γ a string of terminal and nonterminal symbols. Such a rule can be applied regardless of the context in which A appears, hence the name Context Free Grammar (CFG). CFGs are ideally suited for programming language specification.
Eg: A → aBc
4. Type-3 Grammar: Regular Grammar
Rules are restricted to a single nonterminal on the left-hand side and a right-hand side consisting of a single terminal, possibly followed by a single nonterminal. The rule S → ε is also allowed if S does not appear on the right side of any rule.
Eg: A → ε
A → a
A → aB
11. Derivation
Let production P1 of grammar G be of the form
P1 : A::= α
and let β be a string such that β = γAθ, then
replacement of A by α in string β constitutes a
derivation according to production P1.
• Example
<Sentence> ::= <Noun Phrase><Verb Phrase>
<Noun Phrase> ::= <Article> <Noun>
<Verb Phrase> ::= <Verb><Noun Phrase>
<Article> ::= a | an | the
<Noun> ::= boy | apple
<Verb> ::= ate
12. Derivation
The following strings are sentential forms:
<Sentence>
<Noun Phrase> <Verb Phrase>
the boy <Verb Phrase>
the boy <verb> <Noun Phrase>
the boy ate <Noun Phrase>
the boy ate an apple
String: id + id * id
E → E+E | E*E | id
E ⇒ E+E
⇒ id+E
⇒ id+E*E
⇒ id+id*E
⇒ id+id*id
13. Derivation
The process of deriving string is called Derivation
and graphical representation of derivation is called
derivation tree or parse tree.
A derivation is a sequence of production rules applied to get the input string.
During parsing we take two decisions:
1) Deciding the non terminal which is to be replaced.
2) Deciding the production rule by which non
terminal will be replaced.
For this we are having:
1) Left most derivation
2) Right most derivation
14. Leftmost Derivation
A derivation of a string in a grammar G is a leftmost derivation if at every step the leftmost non-terminal is replaced.
Example:
Production:
S → S + S
S → S * S
S → id
String: id+id*id
S ⇒ S * S
⇒ S + S * S
⇒ id + S * S
⇒ id + id * S
⇒ id + id * id
15. Rightmost Derivation
A derivation of a string in a grammar G is a rightmost derivation if at every step the rightmost non-terminal is replaced.
Example:
Production:
S → S + S
S → S * S
S → id
String: id+id*id
S ⇒ S + S
⇒ S + S * S
⇒ S + S * id
⇒ S + id * id
⇒ id + id * id
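The leftmost-derivation discipline above can be sketched in code. This is an illustrative sketch, not from the slides: the helper name and the convention that non-terminals are uppercase symbols are assumptions.

```python
def leftmost_derive(start, steps):
    """Derive a string by always expanding the leftmost non-terminal.
    Non-terminals are uppercase symbols; each step gives the RHS
    used to replace the leftmost one."""
    form = [start]
    trace = [" ".join(form)]
    for rhs in steps:
        # locate the leftmost non-terminal and splice in the chosen RHS
        i = next(k for k, sym in enumerate(form) if sym.isupper())
        form[i:i + 1] = rhs
        trace.append(" ".join(form))
    return trace

# Leftmost derivation of id + id * id with S -> S+S | S*S | id
trace = leftmost_derive("S", [
    ["S", "*", "S"],   # S => S * S
    ["S", "+", "S"],   # => S + S * S
    ["id"],            # => id + S * S
    ["id"],            # => id + id * S
    ["id"],            # => id + id * id
])
```

A rightmost derivation would differ only in picking the last non-terminal instead of the first.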
16. Reduction
Let production P1 of grammar G be of the form
P1 : A::= α
and let σ be a string such that σ = γ α θ, then replacement of
α by A in string σ constitutes a reduction according to
production P1.
Step String
0 the boy ate an apple
1 <Article> boy ate an apple
2 <Article> <Noun> ate an apple
3 <Article> <Noun> <Verb> an apple
4 <Article> <Noun> <Verb> <Article> apple
5 <Article> <Noun> <Verb> <Article> <Noun>
6 <Noun Phrase> <Verb> <Article> <Noun>
7 <Noun Phrase> <Verb> <Noun Phrase>
8 <Noun Phrase> <Verb Phrase>
9 <Sentence>
17. Ambiguous
Grammar
• It implies the possibility of different interpretation of a
source string.
• Existence of ambiguity at the level of the syntactic
structure of a string would mean that more than one parse
tree can be built for the string. So string can have more
than one meaning associated with it.
• A grammar that produces more than one parse tree for
some sentence is said to be ambiguous.
Ambiguous Grammar
E → id | E + E | E * E
id → a | b | c
Both trees derive the same string a + b * c:
Parse tree 1: root +, left subtree a, right subtree * with children b and c, i.e. the grouping a + (b * c)
Parse tree 2: root *, left subtree + with children a and b, right subtree c, i.e. the grouping (a + b) * c
18. Ambiguous
Grammar
E → E + E | E * E | id
Two distinct parse trees exist for id + id * id:
Parse tree 1: E(E(id) + E(E(id) * E(id))), the grouping id + (id * id)
Parse tree 2: E(E(E(id) + E(id)) * E(id)), the grouping (id + id) * id
19. Ambiguous
Grammar
Example:
Prove that the given grammar is ambiguous:
E → a | Ea | bEE | EEb | EbE
Ans:
Consider the string baaab. It has two distinct leftmost derivations:
Leftmost derivation 1:
E ⇒ bEE
⇒ baE
⇒ baEEb
⇒ baaEb
⇒ baaab
Leftmost derivation 2:
E ⇒ EEb
⇒ bEEEb
⇒ baEEb
⇒ baaEb
⇒ baaab
20. Left
Recursion
In leftmost derivation by scanning the input from left to right, grammars of the form A → Aα may cause endless recursion.
Such grammars are called left-recursive and they must be transformed if we want to use a top-down parser.
Example:
E → Ea | E+b | c
21. Algorithm
Assign an ordering A1, …, An to the nonterminals of the grammar;
for i = 1 to n do
begin
  for j = 1 to i-1 do
  begin
    replace each production of the form Ai → Ajγ
    by the productions Ai → δ1γ | δ2γ | … | δkγ,
    where Aj → δ1 | δ2 | … | δk are all the current Aj-productions;
  end
  eliminate the immediate left recursion among the Ai-productions;
end
22. Left
Recursion
• There are three types of left recursion:
direct (A → Aα)
indirect (A → BC, B → Aδ)
hidden (A → BA, B → ε)
To eliminate direct left recursion replace
A → Aα1 | Aα2 | ... | Aαm | β1 | β2 | ... | βn
with
A → β1A' | β2A' | ... | βnA'
A' → α1A' | α2A' | ... | αmA' | ε
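The elimination of direct left recursion can be sketched as a small transformation on productions. This is an illustrative sketch: representing alternatives as lists of symbols and using 'eps' for the empty string are assumptions, not slide notation.

```python
def eliminate_direct_left_recursion(nt, alts):
    """A -> A a1 | ... | A am | b1 | ... | bn   becomes
       A  -> b1 A' | ... | bn A'
       A' -> a1 A' | ... | am A' | eps
    Alternatives are lists of symbols; 'eps' stands for the empty string."""
    rec = [alt[1:] for alt in alts if alt and alt[0] == nt]      # the alpha tails
    non_rec = [alt for alt in alts if not alt or alt[0] != nt]   # the beta alternatives
    if not rec:
        return {nt: alts}          # nothing to do
    new = nt + "'"
    return {
        nt: [alt + [new] for alt in non_rec],
        new: [alt + [new] for alt in rec] + [["eps"]],
    }

# E -> E + T | T  becomes  E -> T E' ;  E' -> + T E' | eps
result = eliminate_direct_left_recursion("E", [["E", "+", "T"], ["T"]])
```

This handles the direct case only; indirect and hidden left recursion need the ordering-based algorithm above.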
23. Example
1. E → E + T | T
T → T * F | F
F → (E) | id
Ans.
General rule: A → Aα | β
is replaced with
A → βA'
A' → αA' | ε
Applying it:
E → TE'
E' → +TE' | ε
T → FT'
T' → *FT' | ε
F → (E) | id
24. Example
1. A → Aad | Afg | b
Ans:
Remove left recursion:
A → bA'
A' → adA' | fgA' | ε
2. A → Acd | Ab | jk
B → Bh | n
Ans:
Remove left recursion:
A → jkA'
A' → cdA' | bA' | ε
B → nB'
B' → hB' | ε
25. Example
3. E → Aa | b
A → Ac | Ed | ε
Ans:
Substitute E in the A-productions:
E → Aa | b
A → Ac | Aad | bd | ε
Remove left recursion:
E → Aa | b
A → bdA' | A'
A' → cA' | adA' | ε
26. Left
Factoring
Left factoring is a grammar transformation that is useful
for producing a grammar suitable for predictive parsing.
Consider,
S → if E then S else S | if E then S
Which of the two productions should we use to expand non-terminal S when the next token is if?
We can solve this problem by factoring out the
common part in these rules. This way, we are
postponing the decision about which rule to choose
until we have more information (namely, whether
there is an else or not).
This is called left factoring
27. Algorithm
For each non-terminal A, find the longest prefix α common to two or more of its alternatives.
If α ≠ ε, i.e., there is a non-trivial common prefix, replace all the A-productions
A → αβ1 | αβ2 | … | αβn | γ,
where γ represents all the alternatives that do not start with α, by
A → αA' | γ
A' → β1 | β2 | … | βn
Here, A' is a new non-terminal. Repeatedly apply this transformation until no two alternatives for a non-terminal have a common prefix.
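One round of this transformation can be sketched as follows. This is a simplified sketch, assuming alternatives are lists of symbols with 'eps' for the empty string, and grouping alternatives by their first symbol before extracting the common prefix (a mild simplification of the algorithm above):

```python
def common_prefix(seqs):
    """Longest common prefix of a list of symbol sequences."""
    prefix = []
    for symbols in zip(*seqs):
        if len(set(symbols)) != 1:
            break
        prefix.append(symbols[0])
    return prefix

def left_factor(nt, alts):
    """One round of left factoring: pull the longest common prefix out of
    the group of alternatives that share their first symbol."""
    best, group = [], []
    for a in alts:
        shared = [b for b in alts if b[:1] == a[:1]]
        if len(shared) >= 2:
            p = common_prefix(shared)
            if len(p) > len(best):
                best, group = p, shared
    if not best:
        return {nt: alts}          # no non-trivial common prefix
    new = nt + "'"
    rest = [a for a in alts if a not in group]
    return {
        nt: [best + [new]] + rest,
        new: [a[len(best):] or ["eps"] for a in group],
    }

# S -> cdLk | cdk | cd  becomes  S -> cd S' ;  S' -> Lk | k | eps
result = left_factor("S", [["c", "d", "L", "k"], ["c", "d", "k"], ["c", "d"]])
```

As the algorithm says, the transformation would be re-applied until no two alternatives share a prefix.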
29. Example
E → T+E | T
T → V*T | V
V → id
Ans.
E → TE'
E' → +E | ε
T → VT'
T' → *T | ε
V → id
30. Example
1. S → cdLk | cdk | cd
L → mn | ε
Ans.
S → cdS'
S' → Lk | k | ε
L → mn | ε
2. E → iEtE | iEtEeE | a
A → b
Ans.
E → iEtEE' | a
E' → ε | eE
A → b
31. Compute
First
Example:
A → bcg | gh
FIRST(A) = {b, g}
A → Bcd | gh
B → m | ε
FIRST(A) = {m, c, g}
FIRST(B) = {m, ε}
32. Compute
First
Example:
A → BCD | Cx
B → b | ε
C → c | ε
D → d | ε
FIRST(A) = {b, c, d, x, ε}
FIRST(B) = {b, ε}
FIRST(C) = {c, ε}
FIRST(D) = {d, ε}
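The FIRST computation illustrated above can be sketched as a fixed-point iteration. This is an illustrative sketch: the dictionary encoding and the 'eps' marker are assumptions, not slide notation.

```python
def compute_first(grammar):
    """Iterative FIRST computation. grammar maps each non-terminal to a
    list of alternatives (tuples of symbols); any symbol not in the map
    is treated as a terminal, and 'eps' marks the empty string."""
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, alts in grammar.items():
            for alt in alts:
                for sym in alt:
                    if sym not in grammar:            # terminal (or eps)
                        if sym not in first[nt]:
                            first[nt].add(sym)
                            changed = True
                        break
                    add = first[sym] - {"eps"}
                    if not add <= first[nt]:
                        first[nt] |= add
                        changed = True
                    if "eps" not in first[sym]:
                        break                         # sym cannot vanish
                else:                                 # every symbol can vanish
                    if "eps" not in first[nt]:
                        first[nt].add("eps")
                        changed = True
    return first

# The last slide example: A -> BCD | Cx, with B, C, D all nullable
g = {
    "A": [("B", "C", "D"), ("C", "x")],
    "B": [("b",), ("eps",)],
    "C": [("c",), ("eps",)],
    "D": [("d",), ("eps",)],
}
```

Running it reproduces the slide's sets, e.g. FIRST(A) = {b, c, d, x, ε}.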
37. Top down
parser
A top-down parser generates the parse tree for the given input string from the grammar productions by expanding the non-terminals, i.e., it starts from the start symbol and ends at the terminals.
It works as follows:
Start with the root, i.e., the start symbol.
Grow the tree downwards by expanding productions at the lower levels of the tree.
Select a non-terminal and extend it by adding children corresponding to the right side of some production for that non-terminal.
Repeat until the lower fringe consists only of terminals and the input is consumed.
Top-down parsing basically finds a leftmost derivation for an input string.
45. Recursive
Descent
Parser
A Recursive Descent Parser is a variant of top-down parsing.
It may involve backtracking.
Top-down parsing can be viewed as an attempt to find a leftmost derivation for an input string. Equivalently, it can be viewed as an attempt to construct a parse tree for the input, starting from the root and creating the nodes of the parse tree in preorder.
Salient advantages of RDP are its simplicity and generality.
49. Recursive
Descent
Parser
Advantages:
• They are exceptionally simple.
• They can be constructed from recognizers simply by doing some extra work, specifically building a parse tree.
Disadvantages:
• Not very fast.
• It is difficult to provide really good error messages.
• They cannot do parses that require arbitrarily long lookaheads.
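A recursive descent recognizer for the non-left-recursive expression grammar derived earlier can be sketched with one procedure per non-terminal. This is an illustrative sketch; the class and method names are assumptions:

```python
class RDParser:
    """Recursive-descent recognizer for
    E -> T E'; E' -> + T E' | eps; T -> F T'; T' -> * F T' | eps;
    F -> ( E ) | id.
    No backtracking is needed because each alternative is selected
    by looking at the next token only."""
    def __init__(self, tokens):
        self.toks = tokens + ["$"]   # end marker
        self.i = 0
    def peek(self):
        return self.toks[self.i]
    def eat(self, t):
        if self.peek() != t:
            raise SyntaxError(f"expected {t}, got {self.peek()}")
        self.i += 1
    def parse(self):
        self.E()
        self.eat("$")
        return True
    def E(self):        # E -> T E'
        self.T(); self.Eprime()
    def Eprime(self):   # E' -> + T E' | eps
        if self.peek() == "+":
            self.eat("+"); self.T(); self.Eprime()
    def T(self):        # T -> F T'
        self.F(); self.Tprime()
    def Tprime(self):   # T' -> * F T' | eps
        if self.peek() == "*":
            self.eat("*"); self.F(); self.Tprime()
    def F(self):        # F -> ( E ) | id
        if self.peek() == "(":
            self.eat("("); self.E(); self.eat(")")
        else:
            self.eat("id")
```

Because the grammar was left-factored and freed of left recursion first, each procedure terminates and never needs to undo a choice.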
50. Predictive
Parser
A special case of the recursive descent parser is called a predictive parser.
In many cases, by carefully writing a grammar, eliminating left recursion from it, and left factoring the resulting grammar, we can obtain a grammar that can be parsed by a recursive descent parser that needs no backtracking, i.e., a predictive parser.
stmt → if expr then stmt else stmt
| while expr do stmt
| begin stmt_list end
The keywords if, while, and begin tell us which alternative is the only one that could possibly succeed if we are to find a statement.
52. Non
Recursive
Predictive
parsing
The parser works as follows (X is the symbol on top of the stack, a the current input symbol):
1) If X = a = $, the parser halts and announces successful completion of parsing.
2) If X = a ≠ $, the parser pops X off the stack and advances the input pointer to the next input symbol.
3) If X is a non-terminal, the program consults entry M[X, a] of parsing table M. This entry will be either an X-production of the grammar or an error entry.
For example, if M[X, a] = {X → UVW}, the parser replaces X on top of the stack by WVU (with U on top).
If M[X, a] = error, the parser calls an error recovery routine.
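The three rules above can be sketched as a driver loop. This is an illustrative sketch using the LL(1) table for the expression grammar E → TE', E' → +TE' | ε, T → VT', T' → *VT' | ε, V → (E) | id built on the following slides; the dictionary encoding is an assumption:

```python
# LL(1) table: (non-terminal, lookahead) -> RHS ([] encodes an eps-production)
TABLE = {
    ("E", "id"): ["T", "E'"], ("E", "("): ["T", "E'"],
    ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
    ("T", "id"): ["V", "T'"], ("T", "("): ["V", "T'"],
    ("T'", "*"): ["*", "V", "T'"], ("T'", "+"): [], ("T'", ")"): [], ("T'", "$"): [],
    ("V", "id"): ["id"], ("V", "("): ["(", "E", ")"],
}
NONTERMS = {"E", "E'", "T", "T'", "V"}

def ll1_parse(tokens):
    stack = ["$", "E"]                    # start symbol on top of $
    toks = tokens + ["$"]
    i = 0
    while stack:
        X, a = stack.pop(), toks[i]
        if X == a == "$":
            return True                   # rule 1: success
        if X not in NONTERMS:             # rule 2: match a terminal
            if X != a:
                return False
            i += 1
        else:                             # rule 3: expand via M[X, a]
            rhs = TABLE.get((X, a))
            if rhs is None:
                return False              # error entry
            stack.extend(reversed(rhs))   # push RHS with first symbol on top
    return False
```

Pushing the RHS reversed corresponds to "replace X by WVU with U on top".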
53. LL(1) Parser
or
Predictive
parsing
An LL(1) parser is a table-driven parser.
The first L stands for left-to-right scanning, the second L stands for leftmost derivation, and the '1' in LL(1) indicates that the parser uses a look-ahead of one source symbol; that is, the prediction to be made is determined by the next source symbol.
A major advantage of LL(1) parsing is its amenability to automatic construction by a parser generator.
The data structures used by this parser are an input buffer, a stack, and a parsing table.
54. LL(1) Parser
or
Predictive
parsing
Steps to construct an LL(1) parser:
I. Remove left recursion.
II. Compute FIRST and FOLLOW of the non-terminals.
III. Construct the predictive parsing table.
IV. Parse the input string with the help of the parsing table.
55. LL(1) Parser
Example:
E → E + T | T
T → V * T | V
V → id | (E)
Step 1: Remove Left Recursion
E → TE'
E' → +TE' | ε
T → VT'
T' → *VT' | ε
V → id | (E)
56. LL(1) Parser
Example:
E → E + T | T
T → T * V | V
V → (E) | id
Step 1: Remove Left Recursion
E → TE'
E' → +TE' | ε
T → VT'
T' → *VT' | ε
V → (E) | id
Step 2: FIRST and FOLLOW
      FIRST      FOLLOW
E     {(, id}    {$, )}
E'    {+, ε}     {$, )}
T     {(, id}    {+, $, )}
T'    {*, ε}     {+, $, )}
V     {(, id}    {*, +, $, )}
57. LL(1) Parser
Example:
E → E + T | T
T → T * V | V
V → (E) | id
Step 3: Predictive parsing table
      id         *            +            (          )         $
E     E → TE'                              E → TE'
E'                             E' → +TE'              E' → ε    E' → ε
T     T → VT'                              T → VT'
T'                T' → *VT'    T' → ε                 T' → ε    T' → ε
V     V → id                               V → (E)
63. Handle and
Handle
Pruning
Handle: A handle of a string is a substring of the string that matches the right side of a production, and we can reduce such a substring to the non-terminal on the left-hand side of that production.
Handle Pruning: The process of discovering a handle and reducing it to the appropriate left-hand side non-terminal is known as handle pruning.
Right sentential form    Handle    Reducing production
id1 + id2 * id3          id1       E → id
E + id2 * id3            id2       E → id
E + E * id3              id3       E → id
E + E * E                E * E     E → E * E
E + E                    E + E     E → E + E
E
Another example:
E → E + E
E → E * E
E → id
id + id    (input string)
E + id     (id is the handle)
E + E      (id is the handle)
E          (E + E is the handle)
64. Shift Reduce
Parsing
• Attempts to construct a parse tree for an input string
beginning at the leaves (bottom) and working up towards
the root (top).
• “Reducing” a string w to the start symbol of a grammar.
• At each step, decide on some substring that matches the
RHS of some production.
• Replace this string by the LHS (called reduction).
65. Shift Reduce
Parsing
It has the following operations:
1. Shift: Moving symbols from the input buffer onto the stack; this action is called shift.
2. Reduce: If the handle appears on top of the stack then it is reduced by the appropriate rule. That means the R.H.S. of the rule is popped off and the L.H.S. is pushed on. This action is called the reduce action.
3. Accept: If the stack contains only the start symbol and the input buffer is empty at the same time, then that action is called accept.
4. Error: A situation in which the parser can neither shift nor reduce a symbol, and cannot even perform the accept action, is called an error.
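The shift/reduce/accept/error actions can be sketched as a naive loop that reduces eagerly whenever the top of the stack matches the right side of a production. This is an illustrative sketch only: a real shift-reduce parser consults a parsing table to decide between shifting and reducing, while this greedy version merely happens to accept because the example grammar is ambiguous.

```python
# An ambiguous expression grammar, as in the handle-pruning example
GRAMMAR = [("E", ("E", "+", "E")), ("E", ("E", "*", "E")), ("E", ("id",))]

def shift_reduce(tokens):
    """Trace a naive shift-reduce parse: reduce whenever the stack top
    matches an RHS, otherwise shift the next input symbol."""
    stack, i, trace = ["$"], 0, []
    tokens = tokens + ["$"]
    while True:
        reduced = False
        for lhs, rhs in GRAMMAR:              # try every production's RHS
            n = len(rhs)
            if tuple(stack[-n:]) == rhs:
                del stack[-n:]                # pop the RHS ...
                stack.append(lhs)             # ... push the LHS (reduce)
                trace.append(f"reduce {lhs} -> {' '.join(rhs)}")
                reduced = True
                break
        if reduced:
            continue
        if stack == ["$", "E"] and tokens[i] == "$":
            trace.append("accept")            # start symbol alone + empty input
            return trace
        if tokens[i] == "$":
            trace.append("error")             # cannot shift, reduce, or accept
            return trace
        stack.append(tokens[i])               # shift
        trace.append(f"shift {tokens[i]}")
        i += 1
```

For id + id * id the trace ends in accept after three E → id reductions and the E + E and E * E reductions.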
69. Operator
Precedence
Parsing
• In an operator grammar, no production rule can have:
• ε on the right side
• two adjacent non-terminals on the right side.
Example:
E → AB        E → EOE       E → E + E
A → a         E → id        E → E - E
B → b         O → + | -     E → id
Not an        Not an        An operator
operator      operator      precedence
grammar       grammar       grammar
70. Operator
Precedence
Parsing
In operator precedence parsing, we define the following relations:
Relation    Meaning
a <· b      a "yields precedence to" b
a = b       a "has the same precedence as" b
a ·> b      a "takes precedence over" b
71. Operator
Precedence
Parsing
1. Set pointer i to the first symbol of string w. The string is delimited by $ at both ends, $ w $, and the stack initially contains $.
2. If $ is on top of the stack and $ is the symbol pointed to by i, then return (accept).
3. If a is on top of the stack and b is the symbol read by pointer i, then
a) if a <· b or a = b then
push b onto the stack
advance the pointer i to the next input symbol
b) else if a ·> b then
while (top symbol of the stack ·> most recently popped terminal symbol)
{
pop the stack; here popping a symbol means reducing the terminal symbol to its equivalent non-terminal
}
c) else error()
74. Operator
Precedence
Parsing
A handle can be found by the following process:
1) Scan the string from the left end until the first ·> is encountered.
2) Then scan backwards (to the left) over any = until a <· is encountered.
3) The handle contains everything to the left of the first ·> and to the right of the <· encountered in step 2, including any surrounding non-terminals.
$ <· id ·> + <· id ·> * <· id ·> $
75. Precedence
function
      id    +     *     $
id          ·>    ·>    ·>
+     <·    ·>    <·    ·>
*     <·    ·>    ·>    ·>
$     <·    <·    <·
For each terminal a, create two nodes fa and ga in a graph; then:
If a ·> b, add an edge fa → gb.
If a <· b, add an edge gb → fa.
76. Precedence
function
From the table above:
id ·> + gives fid → g+      + <· id gives gid → f+
id ·> * gives fid → g*      * <· id gives gid → f*
id ·> $ gives fid → g$      $ <· id gives gid → f$
and in the same way for all the remaining entries of the table.
77. Precedence
function
If the constructed graph has a cycle, then no precedence functions exist.
When there are no cycles, take f(a) and g(b) to be the lengths of the longest paths starting from fa and gb respectively.
Now,
      +    *    id    $
f     2    4    4     0
g     1    3    5     0
79. Operator
Precedence
Parsing
• Disadvantages:
• It covers only a small class of grammars.
• It is difficult to decide which language is recognized by the grammar.
• Advantages:
• Simple.
• Powerful enough for expressions in programming languages.
80. LR Parsers
• This is the most efficient method of the bottom-up parsing
which can be used to parse the large class of context free
grammars. This method is also called LR parser.
• Non backtracking shift reduce technique.
• L stands for Left to right scanning
• R stands for rightmost derivation in reverse.
82. LR Parsers
The structure of an LR parser consists of an input buffer for storing the input string, a stack for storing the grammar symbols, output, and a parsing table comprising two parts, namely action and goto.
There is one parsing program, which is actually the driving program; it reads input symbols one at a time from the input buffer.
The driving program works along the following lines:
1. It initializes the stack with the start state and invokes the scanner (lexical analyzer) to get the next token.
2. It determines sj, the state currently on top of the stack, and ai, the current input symbol.
3. It consults the parsing table for the action[sj, ai], which can have one of the following values:
83. LR Parsers
i. si means shift state i.
ii. rj means reduce by rule j.
iii. Accept means Successful parsing is done.
iv. Error indicates syntactical error.
84. Definitions
LR(0) item: An LR(0) item for grammar G is a production rule in which the symbol • is inserted at some position in the RHS of the rule.
S → •ABC, S → A•BC, S → ABC•
Augmented Grammar: If a grammar G has start symbol S, then the augmented grammar is a new grammar G' with a new start symbol S' such that S' → S. The purpose of this grammar is to indicate the acceptance of input.
Kernel items: The collection of the item S' → •S and all items whose dots are not at the leftmost end of the RHS of the rule.
Non-kernel items: The collection of all items whose dots are at the leftmost end of the RHS of the rule.
Functions closure and goto: These are the two important functions required to create the collection of canonical sets of items.
Viable prefix: The set of prefixes of right sentential forms that can appear on the stack during shift/reduce parsing.
85. LR(0) Parser
Example:
S → AA
A → aA | b
Step 1 (augment the grammar):
S' → S
S → AA
A → aA
A → b
Step 2 (canonical collection of LR(0) items):
I0: S' → •S
    S → •AA
    A → •aA
    A → •b
I1 = goto(I0, S): S' → S•
I2 = goto(I0, A): S → A•A
    A → •aA
    A → •b
I3 = goto(I0, a): A → a•A
    A → •aA
    A → •b
I4 = goto(I0, b): A → b•
I5 = goto(I2, A): S → AA•
I6 = goto(I3, A): A → aA•
goto(I2, a) = I3, goto(I2, b) = I4, goto(I3, a) = I3, goto(I3, b) = I4
86. LR(0) Parser
      Action              Goto
      a     b     $       S    A
0     S3    S4            1    2
1                 Accept
2     S3    S4                 5
3     S3    S4                 6
4     r3    r3    r3
5     r1    r1    r1
6     r2    r2    r2
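The driving program described earlier can be sketched against this table. This is an illustrative sketch: the dictionary encoding of the table is an assumption, with productions numbered 1: S → AA, 2: A → aA, 3: A → b as above.

```python
# action[state, symbol]: ('s', state) = shift, ('r', rule) = reduce, ('acc',) = accept
ACTION = {
    (0, "a"): ("s", 3), (0, "b"): ("s", 4),
    (1, "$"): ("acc",),
    (2, "a"): ("s", 3), (2, "b"): ("s", 4),
    (3, "a"): ("s", 3), (3, "b"): ("s", 4),
    (4, "a"): ("r", 3), (4, "b"): ("r", 3), (4, "$"): ("r", 3),
    (5, "a"): ("r", 1), (5, "b"): ("r", 1), (5, "$"): ("r", 1),
    (6, "a"): ("r", 2), (6, "b"): ("r", 2), (6, "$"): ("r", 2),
}
GOTO = {(0, "S"): 1, (0, "A"): 2, (2, "A"): 5, (3, "A"): 6}
RULES = {1: ("S", 2), 2: ("A", 2), 3: ("A", 1)}   # rule -> (LHS, RHS length)

def lr_parse(tokens):
    """Table-driven LR driver: shift states, pop |RHS| states on reduce,
    then push the goto of the exposed state on the rule's LHS."""
    stack = [0]                       # state stack, start state on top
    toks = tokens + ["$"]
    i = 0
    while True:
        act = ACTION.get((stack[-1], toks[i]))
        if act is None:
            return False              # error entry
        if act[0] == "acc":
            return True               # successful parse
        if act[0] == "s":
            stack.append(act[1])      # shift: push state, advance input
            i += 1
        else:
            lhs, n = RULES[act[1]]    # reduce: pop RHS, goto on LHS
            del stack[len(stack) - n:]
            stack.append(GOTO[(stack[-1], lhs)])
```

The stack here holds only states; the grammar symbols are implicit in the states, as in most practical LR implementations.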
88. LR(0)
Disadvantage
• In LR(0), a reduce entry occupies the entire row of the table: the parser reduces regardless of the next input symbol, so conflicts can only be resolved by looking back at the grammar. Hence LR(0) is not powerful and accurate.
89. SLR
SLR means Simple LR.
A grammar for which an SLR parser can be constructed is said to be an SLR grammar.
SLR is a type of LR parser with small parse tables and a relatively simple parser generator algorithm.
The parsing table has two parts (action, goto).
The parsing table entries can be:
1. Shift s, where s is a state
2. Reduce by a grammar production
3. Accept, and
4. Error
90. SLR(1)
Parser
Example:
S → CC
C → cC | d
Step 1 (augment the grammar):
S' → S
S → CC
C → cC
C → d
Step 2 (canonical collection of LR(0) items):
I0: S' → •S
    S → •CC
    C → •cC
    C → •d
I1 = goto(I0, S): S' → S•
I2 = goto(I0, C): S → C•C
    C → •cC
    C → •d
I3 = goto(I0, c): C → c•C
    C → •cC
    C → •d
I4 = goto(I0, d): C → d•
I5 = goto(I2, C): S → CC•
I6 = goto(I3, C): C → cC•
goto(I2, c) = I3, goto(I2, d) = I4, goto(I3, c) = I3, goto(I3, d) = I4
91. SLR(1)
Parser
      Action              Goto
      c     d     $       S    C
0     S3    S4            1    2
1                 Accept
2     S3    S4                 5
3     S3    S4                 6
4     r3    r3    r3
5                 r1
6     r2    r2    r2
FOLLOW sets:
S' = {$}
S = {$}
C = {c, d, $}
Numbered productions:
0: S' → S
1: S → CC
2: C → cC
3: C → d
93. SLR(1)
Parser
Check whether the given grammar is SLR or not.
Example 1:
E → E + T | T
T → T * F | F
F → (E) | id
String: id+id*id
Example 2:
S → AaAb
S → BbBa
A → ε
B → ε
Example 3:
S → L=R
S → R
L → *R
L → id
R → L
94. SLR(1)
Parser
Example:
E → E + T | T
T → T * F | F
F → (E) | id
Step 1 (augment the grammar):
E' → E
E → E + T | T
T → T * F | F
F → (E) | id
Step 2 (canonical collection of LR(0) items):
I0: E' → •E
    E → •E+T | •T
    T → •T*F | •F
    F → •(E) | •id
I1 = goto(I0, E): E' → E•
    E → E•+T
I2 = goto(I0, T): E → T•
    T → T•*F
I3 = goto(I0, F): T → F•
I4 = goto(I0, ( ): F → (•E)
    E → •E+T | •T
    T → •T*F | •F
    F → •(E) | •id
I5 = goto(I0, id): F → id•
I6 = goto(I1, +): E → E+•T
    T → •T*F | •F
    F → •(E) | •id
I7 = goto(I2, *): T → T*•F
    F → •(E) | •id
I8 = goto(I4, E): F → (E•)
    E → E•+T
goto(I4, T) = I2, goto(I4, F) = I3, goto(I4, ( ) = I4
95. SLR(1)
Parser
I9 = goto(I6, T): E → E+T•
    T → T•*F
I10 = goto(I7, F): T → T*F•
I11 = goto(I8, ) ): F → (E)•
goto(I4, id) = I5, goto(I6, F) = I3, goto(I6, ( ) = I4, goto(I6, id) = I5,
goto(I7, ( ) = I4, goto(I7, id) = I5, goto(I8, +) = I6, goto(I9, *) = I7
FOLLOW (E') = {$}
FOLLOW (E) = {+, ), $}
FOLLOW (T) = {+, *, ), $}
FOLLOW (F) = {+, *, ), $}
97. SLR(1)
Parser
Example:
S → L = R | R
L → * R | id
R → L
Step 1 (augment the grammar):
S' → S
S → L = R | R
L → * R | id
R → L
Step 2 (canonical collection of LR(0) items):
I0: S' → •S
    S → •L=R | •R
    L → •*R | •id
    R → •L
I1 = goto(I0, S): S' → S•
I2 = goto(I0, L): S → L•=R
    R → L•
I3 = goto(I0, R): S → R•
I4 = goto(I0, *): L → *•R
    R → •L
    L → •*R | •id
I5 = goto(I0, id): L → id•
I6 = goto(I2, =): S → L=•R
    R → •L
    L → •*R | •id
I7 = goto(I4, R): L → *R•
I8 = goto(I4, L): R → L•
I9 = goto(I6, R): S → L=R•
goto(I4, *) = I4, goto(I4, id) = I5, goto(I6, L) = I8, goto(I6, *) = I4, goto(I6, id) = I5
98. SLR(1)
Parser
Example:
S → AaAb
S → BbBa
A → ε
B → ε
Step 1 (augment the grammar):
S' → S
S → AaAb
S → BbBa
A → ε
B → ε
Step 2 (canonical collection of LR(0) items):
I0: S' → •S
    S → •AaAb | •BbBa
    A → •
    B → •
I1 = goto(I0, S): S' → S•
I2 = goto(I0, A): S → A•aAb
I3 = goto(I0, B): S → B•bBa
I4 = goto(I2, a): S → Aa•Ab
    A → •
I5 = goto(I3, b): S → Bb•Ba
    B → •
I6 = goto(I4, A): S → AaA•b
I7 = goto(I5, B): S → BbB•a
I8 = goto(I6, b): S → AaAb•
I9 = goto(I7, a): S → BbBa•
FOLLOW (S') = {$}
FOLLOW (S) = {$}
FOLLOW (A) = {a, b}
FOLLOW (B) = {a, b}
99. SLR(1)
Parser
      ACTION                      GOTO
      a        b        $         S    A    B
0     r3/r4    r3/r4              1    2    3
1                       Accept
2     S4
3              S5
4     r3       r3                      6
5     r4       r4                           7
6              S8
7     S9
8                       r1
9                       r2
The r3/r4 entries in state 0 are reduce/reduce conflicts, so this grammar is not SLR(1).
100. CLR(1)
Canonical LR(1) parser.
CLR(1) parsing configurations have the general form [A → α•β, a].
The lookahead component 'a' represents a possible lookahead after the entire right-hand side has been matched.
The CLR parser is the most powerful of these parsers.
LR(1) item = LR(0) item + lookahead
LR(0) item: A → α•β
LR(1) item: A → α•β, a
In LR(1), "a" is the lookahead symbol.
For an item A → α•Bβ, a, the closure adds items B → •γ, b for each b in FIRST(βa).
101. CLR(1)
Parser
Example:
S → CC
C → aC | d
Step 1 (augment the grammar):
S' → S
S → CC
C → aC
C → d
Step 2 (canonical collection of LR(1) items):
I0: S' → •S, $
    S → •CC, $
    C → •aC, a/d
    C → •d, a/d
I1 = goto(I0, S): S' → S•, $
I2 = goto(I0, C): S → C•C, $
    C → •aC, $
    C → •d, $
I3 = goto(I0, a): C → a•C, a/d
    C → •aC, a/d
    C → •d, a/d
I4 = goto(I0, d): C → d•, a/d
I5 = goto(I2, C): S → CC•, $
I6 = goto(I2, a): C → a•C, $
    C → •aC, $
    C → •d, $
I7 = goto(I2, d): C → d•, $
I8 = goto(I3, C): C → aC•, a/d
I9 = goto(I6, C): C → aC•, $
goto(I3, a) = I3, goto(I3, d) = I4, goto(I6, a) = I6, goto(I6, d) = I7
102. CLR(1)
Parser
      Action              Goto
      a     d     $       S    C
0     S3    S4            1    2
1                 Accept
2     S6    S7                 5
3     S3    S4                 8
4     r3    r3
5                 r1
6     S6    S7                 9
7                 r3
8     r2    r2
9                 r2
In CLR(1), a reduce entry appears only under the lookahead symbols.
105. LALR(1)
In this type of parser the lookahead symbol is generated for each set of items. The tables obtained by this method are smaller than those of the CLR(1) parser.
In fact, the number of states of LALR (Look Ahead LR) and SLR parsers is always the same.
We follow the same steps as discussed for the SLR and canonical LR parsing techniques:
1. Construction of the canonical sets of items along with lookaheads.
2. Building the LALR parsing table.
3. Parsing the input string using the LALR parsing table.
NOTE:
No. of states in SLR = No. of states in LALR < No. of states in CLR
106. LALR(1)
Parser
Example:
S → CC
C → aC | d
Step 1 (augment the grammar):
S' → S
S → CC
C → aC
C → d
Step 2: the canonical collection of LR(1) items is constructed exactly as for the CLR(1) parser:
I0: S' → •S, $ | S → •CC, $ | C → •aC, a/d | C → •d, a/d
I1 = goto(I0, S): S' → S•, $
I2 = goto(I0, C): S → C•C, $ | C → •aC, $ | C → •d, $
I3 = goto(I0, a): C → a•C, a/d | C → •aC, a/d | C → •d, a/d
I4 = goto(I0, d): C → d•, a/d
I5 = goto(I2, C): S → CC•, $
I6 = goto(I2, a): C → a•C, $ | C → •aC, $ | C → •d, $
I7 = goto(I2, d): C → d•, $
I8 = goto(I3, C): C → aC•, a/d
I9 = goto(I6, C): C → aC•, $
goto(I3, a) = I3, goto(I3, d) = I4, goto(I6, a) = I6, goto(I6, d) = I7
The pairs I3/I6, I4/I7 and I8/I9 have identical cores and differ only in lookaheads, so LALR merges them into states 36, 47 and 89.
107. CLR(1)
Parser
      Action              Goto
      a     d     $       S    C
0     S3    S4            1    2
1                 Accept
2     S6    S7                 5
3     S3    S4                 8
4     r3    r3
5                 r1
6     S6    S7                 9
7                 r3
8     r2    r2
9                 r2
A reduce entry appears only under the lookahead symbols.
108. LALR(1)
Parser
      Action                Goto
      a      d      $       S    C
0     S36    S47            1    2
1                   Accept
2     S36    S47                 5
36    S36    S47                 89
47    r3     r3     r3
5                   r1
89    r2     r2     r2
A reduce entry appears only under the lookahead symbols.
111. Syntax
Directed
Definition
A syntax-directed definition specifies the values of attributes by associating semantic rules with the grammar productions.
SDD = Grammar + Semantic Rules
A syntax-directed translation is an executable specification of an SDD.
A syntax-directed definition is a generalization of context-free grammar in which each grammar symbol has an associated set of attributes.
Types of attributes:
1. Synthesized attributes
2. Inherited attributes
112. Syntax
Directed
Definition
E → E + T    { E.val = E.val + T.val }
  | T        { E.val = T.val }
T → T * F    { T.val = T.val * F.val }
  | F        { T.val = F.val }
F → num      { F.val = num.lexval }
113. Synthesized
Attribute
Value of synthesized attribute at a node can be
computed from the value of attributes at the children of
that node in the parse tree.
Syntax directed definition that uses synthesized attribute
exclusively is said to be S-attribute definition.
A parse tree for an S-attribute definition can always be
annotated by evaluating the semantic rules for the
attribute at each node bottom up, from the leaves to root.
An Annotated parse tree is a parse tree showing the
value of the attributes at each node.
The process of computing the attribute values at the node
is called Annotating or Decorating the parse tree.
114. Synthesized
Attribute
Example: Simple desk calculator
Production      Semantic rules
L → E n         print(E.val)
E → E1 + T      E.val = E1.val + T.val
E → T           E.val = T.val
T → T1 * F      T.val = T1.val * F.val
T → F           T.val = F.val
F → (E)         F.val = E.val
F → digit       F.val = digit.lexval
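Since every attribute here is synthesized, the semantic rules can be evaluated bottom-up as the input is parsed. The following is an illustrative sketch of the desk calculator (the function names are assumptions, and it handles single-digit numbers only, as on the slide):

```python
def desk_calc(expr):
    """Evaluate a desk-calculator string like '3*5+4n' following the SDD:
    F.val comes from digit.lexval, and T.val / E.val are combined
    bottom-up with * and +."""
    assert expr.endswith("n")            # L -> E n
    toks = list(expr[:-1]) + ["$"]
    pos = [0]
    def peek():
        return toks[pos[0]]
    def eat():
        pos[0] += 1
        return toks[pos[0] - 1]
    def F():
        if peek() == "(":                # F -> ( E ): F.val = E.val
            eat(); v = E(); eat()
            return v
        return int(eat())                # F -> digit: F.val = digit.lexval
    def T():
        v = F()
        while peek() == "*":             # T -> T1 * F: T.val = T1.val * F.val
            eat(); v = v * F()
        return v
    def E():
        v = T()
        while peek() == "+":             # E -> E1 + T: E.val = E1.val + T.val
            eat(); v = v + T()
        return v
    return E()                           # L -> E n: print(E.val)
```

Evaluating '3*5+4n' multiplies before adding because T sits below E in the grammar, mirroring the annotated parse tree on the next slide.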
115. Synthesized
Attribute
Example: Simple desk calculator
String: 3*5+4n
(The productions and semantic rules are the same as on the previous slide.)
Annotated parse tree for 3*5+4n:
L
├── E.val = 19
│   ├── E.val = 15
│   │   └── T.val = 15
│   │       ├── T.val = 3 ── F.val = 3 ── digit.lexval = 3
│   │       ├── *
│   │       └── F.val = 5 ── digit.lexval = 5
│   ├── +
│   └── T.val = 4 ── F.val = 4 ── digit.lexval = 4
└── n
116. Inherited
Attribute
An inherited attribute at a node in a parse tree is defined
in terms of attributes at the parent and/or siblings of the
node.
Convenient way of expressing the dependency of a
programming language construct on the context in which
it appears.
We can use inherited attributes to keep track of whether an identifier appears on the left or right side of an assignment, to decide whether the address or the value of the identifier is needed.
The inherited attribute distributes type information to the
various identifiers in a declaration.
117. Inherited
Attribute
Production      Semantic rules
D → T L         L.in = T.type
T → int         T.type = integer
T → real        T.type = real
L → L1 , id     L1.in = L.in; addtype(id.entry, L.in)
L → id          addtype(id.entry, L.in)
Symbol T is associated with a synthesized attribute type.
Symbol L is associated with an inherited attribute in.
118. Inherited
Attribute
Production      Semantic rules
D → T L         L.in = T.type
T → int         T.type = integer
T → real        T.type = real
L → L1 , id     L1.in = L.in; addtype(id.entry, L.in)
L → id          addtype(id.entry, L.in)
Annotated parse tree for the declaration real id1, id2, id3:
D
├── T.type = real ── real
└── L.in = real
    ├── L1.in = real
    │   ├── L1.in = real ── id1
    │   ├── ,
    │   └── id2
    ├── ,
    └── id3
119. Difference
Synthesized attribute:
- Its value at a node can be computed from the values of attributes at the children of that node in the parse tree.
- Passes information from bottom to top in the parse tree.
- Used by both S-attributed and L-attributed SDTs.
Inherited attribute:
- Its value at a node can be computed from the values of attributes at the parent and/or siblings of the node.
- Passes information from top to bottom in the parse tree, or from left siblings to right siblings.
- Used only by L-attributed SDTs.
120. Dependency
graph
The directed graph that represents the interdependencies between synthesized and inherited attributes at the nodes of a parse tree is called a dependency graph.
For a rule X → YZ with semantic action X.x = f(Y.y, Z.z), the synthesized attribute X.x depends on the attributes Y.y and Z.z.
For example, for E → E1 + E2 the synthesized attributes are E.val, E1.val and E2.val. The dependencies among the nodes are shown by solid arrows: the arrows from E1.val and E2.val show that the value of E.val depends on E1.val and E2.val.
121. Dependency
graph
Algorithm
for each node n in the parse tree do
  for each attribute a of the grammar symbol at n do
    construct a node in the dependency graph for a;
for each node n in the parse tree do
  for each semantic rule b = f(c1, c2, …, ck)
      associated with the production used at n do
    for i = 1 to k do
      construct an edge from the node for ci to the node for b;
122. Dependency
graph
Algorithm
E → E1 + E2
E → E1 * E2
Production      Semantic rule
E → E1 + E2     E.val = E1.val + E2.val
E → E1 * E2     E.val = E1.val * E2.val
In the dependency graph, edges run from E1.val and E2.val to E.val for each production.
123. Dependency
Graph
Production      Semantic rules
D → T L         L.in = T.type
T → int         T.type = integer
T → real        T.type = real
L → L1 , id     L1.in = L.in; addtype(id.entry, L.in)
L → id          addtype(id.entry, L.in)
Dependency graph for the declaration real id1, id2, id3: the edge T.type → L.in comes from the rule L.in = T.type, and each rule L1.in = L.in adds an edge L.in → L1.in down the list of identifiers id3, id2, id1.
124. Evaluation
Order
A topological sort of a directed acyclic graph is any ordering m1, m2, …, mk of the nodes of the graph such that edges go from nodes earlier in the ordering to later nodes.
That means if mi → mj is an edge from mi to mj, then mi appears before mj in the ordering.
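A valid evaluation order for the attributes is any topological sort of the dependency graph. This is an illustrative sketch using Kahn's algorithm (the node names are taken from the declaration example; the function name is an assumption):

```python
from collections import deque

def topological_order(nodes, edges):
    """Return an ordering m1..mk of nodes such that every edge (u, v)
    has u earlier than v (Kahn's algorithm)."""
    indeg = {n: 0 for n in nodes}
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)   # no prerequisites
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:           # removing u may free its successors
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(nodes):
        raise ValueError("dependency graph has a cycle")
    return order

# Dependencies from D -> T L: T.type feeds L.in, which feeds L1.in
nodes = ["T.type", "L.in", "L1.in", "id1.entry"]
edges = [("T.type", "L.in"), ("L.in", "L1.in")]
order = topological_order(nodes, edges)
```

Evaluating the semantic rules in this order guarantees every attribute's inputs are ready before it is computed.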
125. S-Attributed
definitions
An SDD is S-attributed if every attribute is synthesized.
S-attributed definitions can be implemented during bottom-up parsing without the need to explicitly create parse trees.
Semantic actions are placed at the right end of the RHS.
126. L-Attributed
definitions
An SDD is L-attributed if the edges in the dependency graph go from left to right but never from right to left.
More precisely, each attribute must be either
Synthesized, or
Inherited, but if there is a production A → X1X2…Xn and an inherited attribute Xi.a computed by a rule associated with this production, then the rule may only use:
Inherited attributes associated with the head A
Either inherited or synthesized attributes associated with the occurrences of symbols X1, X2, …, Xi-1 located to the left of Xi
Inherited or synthesized attributes associated with this occurrence of Xi itself, but in such a way that there is no cycle in the graph
128. Construction
of Syntax tree
We use the following functions to create the nodes of the syntax tree for expressions with binary operators. Each function returns a pointer to a newly created node.
mknode(op, left, right) creates an operator node with label op and two fields containing pointers to left and right.
mkleaf(id, entry) creates an identifier node with label id and a field containing entry, a pointer to the symbol-table entry for the identifier.
mkleaf(num, val) creates a number node with label num and a field containing val, the value of the number.
Example: Construct syntax tree for a-4+c
P1: mkleaf(id, entry for a);
P2: mkleaf(num, 4);
P3: mknode(‘-’, P1,P2);
P4: mkleaf(id, entry for c);
P5: mknode(‘+’,P3,P4);
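The node constructors and the P1..P5 sequence above can be sketched directly. This is an illustrative sketch; the Node class and the split of mkleaf into two helpers are assumptions:

```python
class Node:
    """A syntax-tree node: an operator with two children, or a leaf."""
    def __init__(self, label, left=None, right=None, value=None):
        self.label, self.left, self.right, self.value = label, left, right, value

def mknode(op, left, right):
    """Operator node with label op and pointers to left and right."""
    return Node(op, left, right)

def mkleaf_id(entry):
    """Identifier leaf holding a pointer to its symbol-table entry."""
    return Node("id", value=entry)

def mkleaf_num(val):
    """Number leaf holding the value of the number."""
    return Node("num", value=val)

# Syntax tree for a - 4 + c, built bottom-up as on the slide:
p1 = mkleaf_id("entry for a")
p2 = mkleaf_num(4)
p3 = mknode("-", p1, p2)
p4 = mkleaf_id("entry for c")
p5 = mknode("+", p3, p4)
```

The root p5 is the + node whose left child is the subtree for a - 4, matching the order of the five calls.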
130. Syntax
directed
translation
Attributes are used to evaluate the expression along with the process of parsing.
During parsing, the evaluation of an attribute takes place by consulting the semantic action enclosed in { }.
This process of executing code-fragment semantic actions from the syntax-directed definition is done by a syntax-directed translation scheme.
A translation scheme generates the output by executing the semantic actions in an ordered manner.
This process uses a depth-first traversal.
A syntax-directed definition (SDD) is a context-free grammar in which attributes and rules are combined together and associated with grammar symbols and productions respectively. Syntax-directed translation (SDT) embeds those actions into the productions and refers to their execution: it translates a string into a sequence of actions. It is a form of compiler interpretation.
SDD:
Production      Semantic rule
E → E1 + T      E.code = E1.code || T.code || '+'
SDT:
E → E1 + T { print '+' }