LECTURE NOTES
ON
FORMAL LANGUAGES AND AUTOMATA THEORY
(1805PC07)
III B. Tech I semester (1805PC07)
Prepared by
Course Objectives:
The purpose of this course is to acquaint the student with an overview of the theoretical foundations of computer science from the perspective of formal languages and automata.
Course Outcomes:
• Graduates should be able to understand the concept of abstract machines and their power to recognize languages.
• Attain knowledge of language classes and grammars, and the relationship among them.
• Graduates will be able to understand the prerequisites to the compiler design course.
UNIT - I
UNIT - II
UNIT - III
UNIT - IV
Normal Forms for Context- Free Grammars, Closure Properties of Context-Free Languages.
Types of normal forms and their conversions.
Introduction to Turing Machines: Turing Machine, Programming Techniques for Turing
Machines, Extensions to the basic Turing Machine, Restricted Turing Machines, Universal
Turing Machine(UTM).
UNIT - V
TEXT BOOKS:
REFERENCE BOOKS:
INDEX
UNIT – I : Introduction to Finite Automata
1.1 Structural Representations
1.2 The Central Concepts of Automata Theory
1.2.1 Alphabet
1.2.2 Strings
1.2.3 Languages, Problems
1.3 Deterministic Finite Automata
1.4 Nondeterministic Finite Automata (NFA)
1.5 An Application: Text Search
1.6 Finite Automata with Epsilon-Transitions
1.7 Equivalence of DFAs and NFAs
1.8 Moore and Mealy Machines
1.8.1 Moore Machine to Mealy Machine Algorithm
1.8.2 Mealy Machine to Moore Machine Algorithm
UNIT – II : Regular Expressions
2.1 Finite Automata and Regular Expressions
2.2 Applications of Regular Expressions
2.3 Algebraic Laws for Regular Expressions
2.4 Properties of Regular Languages
2.5 Pumping Lemma for Regular Languages
2.6 Closure Properties of Regular Languages
2.7 Decision Properties of Regular Languages
2.8 Equivalence and Minimization of Automata
UNIT – III : Context-Free Grammars
3.1 Context-Free Grammars
3.2 Definition
3.3 Derivations Using a Grammar
3.4 Leftmost and Rightmost Derivations
3.5 The Language of a Grammar
3.6 Derivation (or Parse) Tree
3.7 Sentential Form
3.8 Ambiguity in Context-Free Grammars
3.9 Pushdown Automata (PDA)
3.9.1 Instantaneous Description (ID) of PDA
3.9.2 Construction of Pushdown Automata
3.9.3 PDA Acceptance
3.10 The Languages of a PDA
3.11 Equivalence of PDAs and CFGs
3.11.1 CFG to PDA Conversion
3.11.2 PDA to CFG Conversion
UNIT-I
What is Automata Theory?
Automata theory is the study of abstract machines and automata. It is a theory in theoretical
computer science, under discrete mathematics. The word automata (the plural of automaton)
comes from the Greek word αὐτόματα, which means "self-acting".
Automatons are abstract models of machines that perform computations on an input by moving
through a series of states or configurations. At each state of the computation, a transition
function determines the next configuration on the basis of a finite portion of the present
configuration. As a result, once the computation reaches an accepting configuration, it accepts
that input. The most general and powerful automaton is the Turing machine.
The major objective of automata theory is to develop methods by which computer scientists can
describe and analyze the dynamic behavior of discrete systems, in which signals are sampled
periodically. The behavior of these discrete systems is determined by the way that the system is
constructed from storage and combinational elements. Characteristics of such machines include:
• Inputs: assumed to be sequences of symbols selected from a finite set I of input signals.
Namely, set I is the set {x1, x2, x3, …, xk} where k is the number of inputs.
• Outputs: sequences of symbols selected from a finite set Z. Namely, set Z is the set {y1,
y2, y3 ... ym} where m is the number of outputs.
• A finite state machine has a set of states and two functions called the next-state function
and the output function
• The set of states corresponds to all the possible combinations of the internal storage elements
• The next state function is a combinational logic function that given the inputs and the
current state, determines the next state of the system
• The output function produces a set of outputs from the current state and the inputs
– There are two types of finite state machines
– In a Moore machine, the output only depends on the current state
– In a Mealy machine, the output depends on both the current state and the current input
– We are only going to deal with the Moore machine.
– These two types are equivalent in capabilities
• A Finite State Machine consists of:
K states: S = {s1, s2, …, sk}, where s1 is the initial state
N inputs: I = {i1, i2, …, in}
M outputs: O = {o1, o2, …, om}
Next-state function T(S, I) mapping each current state and input to next state
Output Function P(S) specifies output
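To make the five components above concrete, here is a minimal Python sketch (not part of the original notes; the class and the state names are illustrative) of a Moore machine, whose output depends only on the current state:

# A minimal sketch of the finite state machine described above, as a Moore machine.
class MooreMachine:
    def __init__(self, transitions, outputs, start):
        self.transitions = transitions  # T(S, I): (state, input) -> next state
        self.outputs = outputs          # P(S): state -> output
        self.state = start              # s1, the initial state

    def step(self, symbol):
        self.state = self.transitions[(self.state, symbol)]
        return self.outputs[self.state]

# Example: output 'even' while the number of 1's read so far is even, 'odd' otherwise.
m = MooreMachine(
    transitions={('s1', '0'): 's1', ('s1', '1'): 's2',
                 ('s2', '0'): 's2', ('s2', '1'): 's1'},
    outputs={'s1': 'even', 's2': 'odd'},
    start='s1')
print([m.step(c) for c in "1011"])   # ['odd', 'odd', 'even', 'odd']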
Finite Automata
One way to think of this is that we might write a program using an NFA, but then when it is
"compiled" we turn the NFA into an equivalent DFA.
The transition function
• Takes a state and an input symbol as arguments and returns a state.
• The most important feature of the automaton is its control unit, which can be in any one of
a finite number of internal states at any point. It can change state in some defined manner
determined by a transition function.
Let us first give some intuitive idea about a state of a system and state transitions before
describing finite automata.
Informally, a state of a system is an instantaneous description of that system which gives all
relevant information necessary to determine how the system can evolve from that point on.
Transitions are changes of states that can occur spontaneously or in response to inputs to the
states. Though transitions usually take time, we assume that state transitions are instantaneous
(which is an abstraction).
Some examples of state transition systems are: digital systems, vending machines, etc. A system
containing only a finite number of states and transitions among them is called a finite-state
transition system. Finite-state transition systems can be modeled abstractly by a mathematical
model called a finite automaton.
1.2 The Central Concepts of Automata Theory:
1.2.1 Alphabet
• Definition − An alphabet is any finite set of symbols.
• Example − ∑ = {a, b, c, d} is an alphabet set where ‘a’, ‘b’, ‘c’, and ‘d’ are symbols.
1.2.2 String
• Definition − A string is a finite sequence of symbols taken from ∑.
• Example − ‘cabcad’ is a valid string on the alphabet set ∑ = {a, b, c, d}
Length of a String
• Definition − It is the number of symbols present in a string. (Denoted by |S|).
• Examples −
o If S = ‘cabcad’, |S|= 6
o If |S|= 0, it is called an empty string (Denoted by λ or ε)
Kleene Star
• Definition − The Kleene star, ∑*, is a unary operator on a set of symbols or strings, ∑,
that gives the infinite set of all possible strings of all possible lengths over ∑, including λ.
• Representation − ∑* = ∑0 ∪ ∑1 ∪ ∑2 ∪……. where ∑p is the set of all possible strings of
length p.
• Example − If ∑ = {a, b}, ∑* = {λ, a, b, aa, ab, ba, bb,………..}
1.2.3 Language
• Definition − A language is a subset of ∑* for some alphabet ∑. It can be finite or infinite.
• Example − If the language takes all possible strings of length 2 over ∑ = {a, b}, then L =
{aa, ab, ba, bb}
An automaton processes a string on the tape by repeating the following actions until the tape
head has traversed the entire string:
1. The tape head reads the current tape cell and sends the symbol s found there to the
control. Then the tape head moves to the next cell.
2. The control takes s and the current state and consults the state transition function to get
the next state, which becomes the new current state.
Once the entire string has been processed, the state in which the automaton ends up is examined.
If it is an accept state, the input string is accepted; otherwise, the string is rejected.
Summarizing notes:
• A DFA M = (Q, Σ, δ, q0, F) partitions the set Σ* into two sets: L(M) and Σ* − L(M).
• If L=L(M) then L is a subset of L(M) and L(M) is a subset of L.
• Similarly, if L(M1) = L(M2) then L(M1) is a subset of L(M2) and L(M2) is a subset of
L(M1).
• Some languages are regular, others are not.
For example, if
L1 = {x | x is a string of 0's and 1's containing an even number of 1's} and L2 = {x | x = 0^n 1^n for some
n >= 0}. Then L1 is regular but L2 is not.
Example
Let a deterministic finite automaton be →
• Q = {a, b, c},
• ∑ = {0, 1},
• q0 = a,
• F = {c}, and
Present State    Next State for Input 0    Next State for Input 1
a                a                         b
b                c                         a
c                b                         c
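As an illustration (not part of the original notes), the DFA above can be simulated directly from its transition table; the following Python sketch runs the table and reports acceptance:

# Runs the example DFA above: Q = {a, b, c}, Sigma = {0, 1}, start a, final states {c}.
delta = {('a', '0'): 'a', ('a', '1'): 'b',
         ('b', '0'): 'c', ('b', '1'): 'a',
         ('c', '0'): 'b', ('c', '1'): 'c'}

def dfa_accepts(w, start='a', finals={'c'}):
    state = start
    for symbol in w:
        state = delta[(state, symbol)]   # one table lookup per input symbol
    return state in finals

print(dfa_accepts("10"))   # True:  a -1-> b -0-> c
print(dfa_accepts("101"))  # True:  a -1-> b -0-> c -1-> c
print(dfa_accepts("0"))    # False: a -0-> a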
TRANSITION DIAGRAMS:
1.5 An Application: Text Search
Suppose we are given a set of words, called keywords, and we want to find occurrences of any of
these words. For such a problem, a useful way to proceed is to design an NFA which recognizes, by
entering an accepting state, that it has seen one of the keywords. The NFA is only a nondeterministic
program, but we can run it using lists of states, or transform it to a DFA and get a deterministic
(efficient) program. Once again, we know that this DFA will be correct by construction. This is a
good example of a derivation of a program (DFA) from a specification (NFA).
The following NFA searches for the keywords 'web' and 'ebay' (figure omitted): the start state A has a
self-loop on every input symbol; A goes on w to B, B on e to C, and C on b to D, with D accepting "web";
A also goes on e to E, E on b to F, F on a to G, and G on y to H, with H accepting "ebay".
Almost no thinking is needed to write this NFA. What is a corresponding DFA? Notice that the DFA
has the same number of states as the NFA.
A fragment of a Haskell encoding of this NFA (states form a data type with constructors A .. H):
run [] q = return q      -- no more input: return the state reached
final D = True           -- accepting: the keyword "web" has been seen
final H = True           -- accepting: the keyword "ebay" has been seen
final _ = False
Even for searching for an occurrence of a single keyword this gives an interesting program. This is
connected to the Knuth-Morris-Pratt string searching algorithm, and it is better than the naive string
searching algorithm.
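As a rough illustration of running the keyword NFA "using lists" of states, here is a hedged Python sketch; the state names A–H and the transition structure are taken from the figure described above:

# Runs the keyword-search NFA by carrying a set of current states.
TRANS = {('A', 'w'): {'B'}, ('B', 'e'): {'C'}, ('C', 'b'): {'D'},                     # spells "web"
         ('A', 'e'): {'E'}, ('E', 'b'): {'F'}, ('F', 'a'): {'G'}, ('G', 'y'): {'H'}}  # spells "ebay"
FINALS = {'D', 'H'}

def found_keyword(text):
    states = {'A'}
    for ch in text:
        next_states = {'A'}                       # state A loops on every symbol
        for q in states:
            next_states |= TRANS.get((q, ch), set())
        states = next_states
        if states & FINALS:                       # some copy of the NFA saw "web" or "ebay"
            return True
    return False

print(found_keyword("xxwebxx"))   # True
print(found_keyword("theebayx"))  # True
print(found_keyword("wearby"))    # False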
• Let M = (Q, Σ, δ, q0, F) be an NFA-ε and let w be in Σ*. Then w is accepted by M iff δ̂({q0},
w) contains at least one state in F.
• Let M = (Q, Σ, δ, q0, F) be an NFA-ε. Then the language accepted by M is the set:
L(M) = {w | w is in Σ* and δ̂({q0}, w) contains at least one state in F}
• Therefore, if L is a regular language then there exists an NFA-ε M such that L = L(M).
Notes:
δ’: (Q × Σ) → 2^Q is a function
M’ has the same state set, the same alphabet, and the same start state as M
M’ has no ε transitions
• Theorem: Let L be a language. Then there exists an NFA M such that L= L(M) iff there exists an
NFA-ε M’ such that L = L(M’).
• Proof:
(if) Suppose there exists an NFA-ε M’ such that L = L(M’). Then by Lemma 2 there exists an NFA
M such that L = L(M).
(only if) Suppose there exists an NFA M such that L = L(M). Then by Lemma 1 there exists an NFA-
ε M’ such that L = L(M’).
• Corollary: The NFA-ε machines define the regular languages.
• Therefore, if L is a regular language then there exists an NFA M such that L = L(M).
• It follows that NFAs accept all regular languages. But do NFAs accept only regular languages?
• Lemma 1: Let M be a DFA. Then there exists a NFA M’ such that L(M) = L(M’).
• Proof: Every DFA is an NFA. Hence, if we let M’ = M, then it follows that L(M’) = L(M).
The above is just a formal statement of the observation from the above example.
• Lemma 2: Let M be an NFA. Then there exists a DFA M’ such that L(M) = L(M’).
• Proof: (sketch)
Let M = (Q, Σ, δ, q0, F).
Define a DFA M’ = (Q’, Σ, δ’, q0, F’) as:
Q’ = 2^Q, i.e. each state in M’ corresponds to a subset of states from M:
Q’ = {Q0, Q1, …}, where each Qu = [qi0, qi1, …, qij]
F’ = {Qu | Qu contains at least one state in F}
q0’ = [q0]
δ’(Qu, a) = Qv iff Qv is the union of δ(q, a) over all states q in Qu
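The construction in Lemma 2 can be sketched in Python as follows (a minimal illustration, not from the notes; each DFA state is a frozenset of NFA states):

# Subset construction: build the reachable part of the equivalent DFA.
from itertools import chain

def nfa_to_dfa(nfa_delta, start, finals, alphabet):
    """nfa_delta: dict mapping (state, symbol) -> set of states."""
    start_set = frozenset({start})
    dfa_delta, todo, seen = {}, [start_set], {start_set}
    while todo:
        current = todo.pop()
        for a in alphabet:
            target = frozenset(chain.from_iterable(nfa_delta.get((q, a), set()) for q in current))
            dfa_delta[(current, a)] = target
            if target not in seen:
                seen.add(target)
                todo.append(target)
    dfa_finals = {S for S in seen if S & finals}
    return dfa_delta, start_set, dfa_finals

# Example: NFA accepting strings over {0,1} that end in 1.
delta, q0, F = nfa_to_dfa({('p', '0'): {'p'}, ('p', '1'): {'p', 'f'}},
                          start='p', finals={'f'}, alphabet='01')
print(len({s for s, _ in delta}))  # 2 reachable subset-states: {p} and {p, f}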
Finite automata may have outputs corresponding to each transition. There are two types
of finite state machines that generate output −
Mealy Machine
Moore Machine
Mealy Machine
A Mealy Machine is an FSM whose output depends on the present state as well as the
present input. It can be described by a 6 tuple (Q, ∑, O, δ, X, q0) where −
Q is a finite set of states.
∑ is a finite set of symbols called the input alphabet.
O is a finite set of symbols called the output alphabet.
δ is the input transition function, δ: Q × ∑ → Q.
X is the output function, X: Q × ∑ → O (the output depends on the state and the current input).
q0 is the initial state from where any input is processed (q0 ∈ Q).
Moore Machine
A Moore machine is an FSM whose outputs depend only on the present state. A Moore machine can be
described by a 6-tuple (Q, ∑, O, δ, X, q0) where −
Q is a finite set of states.
∑ is a finite set of symbols called the input alphabet.
O is a finite set of symbols called the output alphabet.
δ is the input transition function, δ: Q × ∑ → Q.
X is the output function, X: Q → O (the output depends only on the state).
q0 is the initial state from where any input is processed (q0 ∈ Q).
The following points differentiate a Mealy machine from a Moore machine:
1) In a Mealy machine, the output depends on the present state and the present input; in a Moore
machine, the output depends only on the present state.
2) Mealy machines react faster to inputs. In Moore machines, more logic is needed to decode the
outputs, which introduces more circuit delay.
1.8.1 Moore Machine to Mealy Machine
Example − Consider the following Moore machine:

Present State    Next State (a=0)    Next State (a=1)    Output
→a               d                   b                   1
b                a                   d                   0
c                c                   c                   0
d                b                   a                   1

Steps 1 & 2 − Copy the next-state entries of the Moore machine into the Mealy table.
Step 3 − Attach to each transition the output of its destination state. The resulting Mealy machine is:

Present State    Next State, Output (a=0)    Next State, Output (a=1)
→a               d, 1                        b, 0
b                a, 1                        d, 1
c                c, 0                        c, 0
d                b, 0                        a, 1
1.8.2 Mealy Machine to Moore Machine
Step 1 − Calculate the number of different outputs for each state Qi that are
available in the state table of the Mealy machine.
Step 2 − If all the outputs of Qi are the same, copy state Qi. If it has n distinct
outputs, break Qi into n states Qio, one for each distinct output o.
Step 3 − If the output of the initial state is 1, insert a new initial state at the
beginning which gives output 0.
Example − Convert the following Mealy machine into an equivalent Moore machine:

Present State    Next State, Output (a=0)    Next State, Output (a=1)
→a               d, 0                        b, 1
b                a, 1                        d, 0
c                c, 1                        c, 0
d                b, 0                        a, 1

Here, states ‘a’ and ‘d’ receive only the outputs 1 and 0 respectively, so we retain states ‘a’ and ‘d’.
But states ‘b’ and ‘c’ receive two different outputs, 1 and 0. So, we divide b into b0, b1 and c
into c0, c1.
The equivalent Moore machine is:

Present State    Next State (a=0)    Next State (a=1)    Output
→a               d                   b1                  1
b0               a                   d                   0
b1               a                   d                   1
c0               c1                  c0                  0
c1               c1                  c0                  1
d                b0                  a                   0
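A rough Python sketch of the state-splitting idea behind this conversion is shown below (my own encoding of the example Mealy machine, not from the notes; it is a partial sketch of Steps 1–3 and omits the special handling of the initial state in Step 3):

# mealy[(state, input)] = (next_state, output); a split Moore state is a (state, output) pair.
mealy = {('a', '0'): ('d', '0'), ('a', '1'): ('b', '1'),
         ('b', '0'): ('a', '1'), ('b', '1'): ('d', '0'),
         ('c', '0'): ('c', '1'), ('c', '1'): ('c', '0'),
         ('d', '0'): ('b', '0'), ('d', '1'): ('a', '1')}

# Outputs that can "arrive" at each state; a state with k distinct incoming outputs splits into k copies.
incoming = {}
for (_, _), (nxt, out) in mealy.items():
    incoming.setdefault(nxt, set()).add(out)

moore_states = {(q, o) for q, outs in incoming.items() for o in outs}
moore_delta = {((q, o), a): mealy[(q, a)]        # the target is already a (state, output) pair
               for (q, o) in moore_states for a in '01'}
moore_output = {s: s[1] for s in moore_states}

print(sorted(moore_states))
# [('a', '1'), ('b', '0'), ('b', '1'), ('c', '0'), ('c', '1'), ('d', '0')] -- the six states in the table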
Unit II
Regular Expressions
Definition:
A regular expression can be recursively defined as follows −
• ε is a regular expression that denotes the language containing only the empty string (L(ε) = {ε}).
• Ø is a regular expression that denotes the empty language (L(Ø) = {}).
• For each symbol a in Σ, a is a regular expression with L(a) = {a}.
• If X and Y are regular expressions, then X + Y, XY and X* are regular expressions, denoting
L(X) ∪ L(Y), L(X)L(Y) and (L(X))* respectively.
• If we apply any of these rules a finite number of times, the result is a regular expression.
• Examples: Let Σ = {0, 1}.
1. Øu = uØ = Ø (multiply by 0)
2. εu = uε = u (multiply by 1)
3. Ø* = ε
4. ε* = ε
5. u + v = v + u
6. u + Ø = u
7. u + u = u
8. u* = (u*)*
9. u(v + w) = uv + uw
10. (u + v)w = uw + vw
11. (uv)*u = u(vu)*
12. (u + v)* = (u* + v)*
= u*(u + v)*
= (u + vu*)*
= (u*v*)*
= u*(vu*)*
= (u*v)*u*
2.4 Properties of Regular Languages
Our main goal is to identify some basic closure properties of regular languages.
• We will be interested in the following types of closure properties.
• Suppose you give me two arbitrary regular languages L and L'.
• Suppose I perform some kind of operation on L and L' such as the set union operation.
• We want to know if the resulting language L union L' is guaranteed to still be regular.
• If it is, we say the class of regular languages has the property of being closed under the set
union operation.
• We will often abbreviate this to say that the class of regular languages is closed under
union.
A second goal is to illustrate the basic methods used to prove such closure properties.
• First, we must define the operation (e.g. set union, set complement) we are interested in.
• We will then create a generic element proof to show that the class of regular languages is
closed under the particular operation.
• Key point:
▪ Note we apply these set operations to individual languages (sets of strings) rather
than to language classes (sets of sets).
▪ However, we are interested in whether or not the resulting language (set of strings)
is still a member of a language class (set of sets).
Definition
The complement Sc of a set S is defined to be the set of all elements of the universe U that are not
elements of S.
Note complement is a unary operator; that is, it is a function on a single input.
In contrast, set union is a binary operator; that is, set union is a function on two inputs.
Examples of languages L (sets of strings) and their complements Lc.
L = Σ*
Lc = {}.
L = {the set of strings of even length}
Lc = {the set of strings not of even length} = {the set of strings of odd length}.
L = {the set of strings beginning with 01}
Lc = {the set of strings not beginning with 01} = {the set of strings beginning with 00, 10, or 11}
union {0, 1, ε}
Generic element proof that the class of regular languages is closed under complement: given a DFA M
with L = L(M), the DFA obtained by swapping the accepting and non-accepting states of M accepts
exactly Lc, so Lc is regular.
Observation:
▪ Looking at the proof, we see that a regular language L and its complement Lc are
arguably identical in complexity since essentially the same FA can recognize either
language.
▪ This is NOT always the case for other classes of languages, a fact we will see in the
next section on context free languages.
Questions
1) What does the statement that the regular languages are closed under complement mean?
2) Suppose L is a non-regular language. Can we conclude that Lc is not regular?
▪ L is any language
L' = Lc, the complement of L
L union L' = Σ*
▪ L = {the set of strings beginning with 01}
L' = {the set of strings beginning with 00}
L union L' = {the set of strings beginning with 0 except 0 itself}
Generic element proof that the class of regular languages is closed under union: given DFAs for L and
L', the product construction (run both DFAs in parallel and accept when either accepts) gives a DFA
for L union L'.
Questions
3) What does the statement that the regular languages are closed under set union mean?
4) Suppose L and L' are non-regular languages. Can we conclude that L union L' is not regular?
Definition:
S - S' is defined to be the set of all elements of the universe U that are elements
of S but not elements of S'.
▪ Note set difference is a binary operator; that is, it is a function on two inputs.
Examples of languages L and L' (sets of strings) and their difference L - L'.
▪ L ={aab,aaba,aabaa}
L' ={b,aaba,bbb}
L - L' = {aab,aabaa}
▪ L is any language
L' = Lc, the complement of L
L - L' = L
▪ L = {the set of strings beginning with 0}
L' = {the set of strings ending with 0}
L - L' = {the set of strings beginning with 0 which do not end with 0}
Examine the generic element proof that the class of regular languages is closed under intersection
and determine how to modify it to show that the class of regular languages is closed under set
difference.
• Pumping Lemma relates the size of string accepted with the number of states in a DFA
• What is the largest string accepted by a DFA with n state?
• Suppose there is no loop?
Now, if there is a loop, what type of strings is accepted via the loop(s)?
• Lemma (the pumping lemma):
Let M be a DFA with |Q| = n states. If there exists a string x in L(M) such that |x| ≥ n, then
there exists a way to write it as x = uvw, where u, v, and w are all in Σ* and:
– 1 ≤ |uv| ≤ n
– |v| ≥ 1
– the strings u v^i w are also in L(M), for all i ≥ 0
• Let:
– u = a1…as
– v = as+1…at
• In addition, let:
– w = at+1…am
In other words, when processing the accepted string x, the loop was traversed once, but could
have been traversed as many times as desired, and the resulting string would still be accepted.
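As a short worked example (not in the original notes) of how the lemma is used: take L2 = {0^n 1^n | n ≥ 0} from earlier and suppose it were accepted by some DFA with n states. Choose x = 0^n 1^n, so |x| ≥ n, and write x = uvw with |uv| ≤ n and |v| ≥ 1. Since |uv| ≤ n, the substring v lies entirely within the leading 0's, say v = 0^k with k ≥ 1. Pumping with i = 2 gives u v^2 w = 0^(n+k) 1^n, which has more 0's than 1's and so is not in L2, contradicting the lemma. Therefore L2 is not regular.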
A decision property for a class of languages is an algorithm that takes a formal description of a
language (e.g., a DFA) and tells whether or not some property holds.
• Representation matters. You might imagine that the language is described informally, so
if my description is "the empty language" then yes, otherwise no. But the representation is a
DFA (or an RE that you will convert to a DFA). Can you tell if L(A) = ∅ for a DFA A?
• Equivalence: do two DFA's define the same language?
• Minimization: a DFA has a unique smallest equivalent DFA, while for NFAs and regular
expressions you can't find a unique "smallest" one.
• Lemma 1: Let r be a regular expression. Then there exists an NFA-ε M such that L(M) = L(r).
Furthermore, M has exactly one final state with no transitions out of it.
• Proof: (by induction on the number of operators, denoted by OP(r), in r).
• Basis: OP(r) = 0
Then r is either Ø, ε, or a, for some symbol a in Σ
• Inductive Hypothesis: Suppose there exists a k ≥ 0 such that for any regular expression r
where 0 ≤ OP(r) ≤ k, there exists an NFA-ε M such that L(M) = L(r). Furthermore, suppose that M
has exactly one final state.
• Inductive Step: Let r be a regular expression with k + 1 operators (OP(r) = k + 1), where k + 1
>= 1.
Case 1) r = r1 + r2
Since OP(r) = k +1, it follows that 0<= OP(r1), OP(r2) <= k. By the inductive hypothesis there
exist NFA-ε machines M1 and M2 such that L(M1) = L(r1) and L(M2) = L(r2). Furthermore,
both M1 and M2 have exactly one final state.
Case 2) r = r1r2
Since OP(r) = k+1, it follows that 0<= OP(r1), OP(r2) <= k. By the inductive hypothesis there
exist NFA-ε machines M1 and M2 such that L(M1) = L(r1) and L(M2) = L(r2). Furthermore,
both M1 and M2 have exactly one final state.
Case 3) r = r1*
Since OP(r) = k+1, it follows that 0<= OP(r1) <= k. By the inductive hypothesis there exists an NFA-
ε machine M1 such that L(M1) = L(r1). Furthermore, M1 has exactly one final state.
Example:
Problem: Construct FA equivalent to RE, r = 0(0+1)*
Solution: r = r1r2
r1 = 0
r2 = (0+1)*
r2 = r3*
r3 = 0+1
r3 = r4 + r5
r4 = 0
r5 = 1
Transition graph:
• Observations:
Arden’s Theorem:
In order to find out a regular expression of a Finite Automaton, we use Arden’s Theorem along
with the properties of regular expressions.
Statement −
If P does not contain null string, then R = Q + RP has a unique solution that is R = QP*
Proof −
R = Q + RP
= Q + (Q + RP)P
= Q + QP + RPP
When we put the value of R in recursively again and again, we get the following equation −
R = Q + QP + QP^2 + QP^3 + …
R = Q(ε + P + P^2 + P^3 + …)
R = QP* (since ε + P + P^2 + … = P*)
Hence, proved.
Method
Step 1 − Create equations of the following form for all the states of the DFA having n states, with
initial state q1:
q1 = q1R11 + q2R21 + … + qnRn1 + ε
q2 = q1R12 + q2R22 + … + qnRn2
…
qn = q1R1n + q2R2n + … + qnRnn
Rij represents the set of labels of edges from qi to qj; if no such edge exists, then Rij = ∅.
Step 2 − Solve these equations to get the equation for the final state in terms of Rij
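A small worked example (not from the original notes): consider the DFA over {0, 1} accepting all strings that end in 1, with states q1 (initial) and q2 (final), where δ(q1, 0) = q1, δ(q1, 1) = q2, δ(q2, 0) = q1 and δ(q2, 1) = q2. Step 1 gives the equations
q1 = q1·0 + q2·0 + ε
q2 = q1·1 + q2·1
Applying Arden's theorem to the second equation (R = q2, Q = q1·1, P = 1) gives q2 = q1·1·1*. Substituting this into the first equation gives q1 = ε + q1(0 + 1·1*·0), so by Arden's theorem q1 = (0 + 11*0)*, and the regular expression for the language of the final state is q2 = (0 + 11*0)* 1 1*.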
We can use Thompson's Construction to find out a Finite Automaton from a Regular Expression.
We will reduce the regular expression into its smallest sub-expressions, convert these to an NFA with
ε-moves, and finally convert that to a DFA.
Method
Step 1 Construct an NFA with Null moves from the given regular expression.
Step 2 Remove Null transition from the NFA and convert it into its equivalent DFA.
Problem
Solution
Now we will remove the ε-transitions. After we remove the ε-transitions from the NDFA, we get
the following −
A finite automaton with null moves (FA-ε) can make transitions not only on input symbols from the
alphabet set but also without any input symbol. This transition without input is called a null
move.
• q0 − an initial state q0 ∈ Q
If in an NDFA there is an ε-move from vertex X to vertex Y, we can remove it using the
following steps −
• Find all the edges starting from Y.
• Copy all these edges starting from X without changing the edge labels.
• If X is an initial state, make Y also an initial state.
• If Y is a final state, make X also a final state.
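A commonly used alternative formulation of ε-move removal (not spelled out in the notes) first computes the ε-closure of each state; the following small Python sketch computes such a closure:

# Set of states reachable from `state` using only ε-moves.
def epsilon_closure(state, eps_edges):
    """eps_edges: dict mapping a state to the set of states reachable by one ε-move."""
    closure, stack = {state}, [state]
    while stack:
        q = stack.pop()
        for r in eps_edges.get(q, set()):
            if r not in closure:
                closure.add(r)
                stack.append(r)
    return closure

# Example: q0 --ε--> q1 --ε--> q2
print(epsilon_closure('q0', {'q0': {'q1'}, 'q1': {'q2'}}))   # {'q0', 'q1', 'q2'}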
Problem − Remove the ε-moves from the given NDFA (the step-by-step diagrams for this example are omitted).
Solution − Apply the steps above for each ε-move in turn: find the edges leaving the target state of the
ε-move, copy all these edges (for example, the edges from q1) onto the source state without changing the
edge labels, and adjust the initial and final states; after repeating this for every ε-move, the
resulting FA has no null moves.
Unit III
CONTEXT-FREE GRAMMARS
3.1 REGULAR GRAMMAR
A linear grammar is a grammar in which at most one variable can occur on the right side of any
production without restriction on the position of this variable.
S ->S1ab,
S1 -> S1ab | S2,
S2 ->a.
A regular grammar is one that is either right-linear or left-linear.
To test if L = M
1. Convert both L and M to DFA's.
2. Imagine the DFA that is the union of the two DFA's (never mind there are two start states)
3. If the TF-algo says that the two start states are distinguishable, then L ≠ M; otherwise, L = M.
Example:
We can "see" that both DFAs accept L(ε+(0+1)*0). The result of the TF-algo (filling table omitted) is
that the two start states are indistinguishable, so the two languages are equal.
• It's not hard to show that regular grammars generate, and NFAs accept, the same class of
languages: the regular languages!
• Example:
Let S → aA, A → abS | b
3.2 CONTEXT-FREE GRAMMAR
• Example #1 CFG: S → 0S1 | ε, where rule (1) is S → 0S1 and rule (2) is S → ε.
You use a grammar by starting with the start non terminal and performing a sequence of rewrites
until all of the non terminals are gone. Each sequence of tokens derivable that way is an allowed
sequence of tokens in the language.
• Example derivations (rule numbers in parentheses):
S => ε (2)
S => 0S1 (1) => 01 (2)
S => 0S1 (1) => 00S11 (1) => 0011 (2)
A leftmost derivation is a derivation that always selects the leftmost nonterminal to rewrite. For
example, using the expression grammar G with productions E → E + E | E * E | ( E ) | n,
E ⇒ E+E
⇒ n+E
⇒ n+E*E
⇒ n+(E)*E
⇒ n+(n)*E
⇒ n+(n)*n
is a leftmost derivation of n + ( n ) * n.
A rightmost derivation is a derivation that always selects the rightmost nonterminal to rewrite.
For example,
E ⇒ E+E
⇒ E+E*E
⇒ E+E*n
⇒ E+(E)*n
⇒ E+(n)*n
⇒ n+(n)*n
is a rightmost derivation of n + ( n ) * n.
Say that α ⇒*G β if it is possible to rewrite string α into β by doing a sequence of zero or more
rewrite steps using the productions of G. For example, we saw above that, using the expression
grammar G above,
E ⇒*G n + ( n ) * n
A grammar describes a language. If G is a grammar with start nonterminal S, then the language
of G is the set of all sequences of tokens α such that S⇒*G α.
• Definition: Let G = (V, T, P, S) be a CFG. A tree is a derivation (or parse) tree if:
– Every vertex has a label from V U T U {ε}
– The label of the root is S
– If a vertex with label A has children with labels X1, X2,…, Xn, from left to right, then
A → X1X2…Xn must be a production in P.
• More Generally, a derivation tree can be defined with any non-terminal as the root.
Representation Technique:
If S → x1x2 …… xn is a production rule in a CFG, then the parse tree / derivation tree will be as
follows −
Top-down Approach −
Bottom-up Approach −
The derivation or the yield of a parse tree is the final string obtained by concatenating the labels
of the leaves of the tree from left to right, ignoring the Nulls. However, if all the leaves are Null,
derivation is Null.
Example
A partial derivation tree is a sub-tree of a derivation tree/parse tree such that either all of its
children are in the sub-tree or none of them are in the sub-tree.
If a partial derivation tree contains the root S, it is called a sentential form. The above sub-tree
is also in sentential form.
Example
Definition: Let G be a CFG. Then G is said to be ambiguous if there exists an x in L(G) with >1
leftmost derivations. Equivalently, G is said to be ambiguous if there exists an x in L(G) with >1
parse trees, or >1 rightmost derivations.
• Note: Given a CFL L, there may be more than one CFG G with L = L(G). Some ambiguous
and some not.
• Definition: Let L be a CFL. If every CFG G with L = L(G) is ambiguous, then L is inherently
ambiguous.
The string has two left-most derivations, and therefore has two distinct parse trees and is ambiguous.
A variable A is useful if and only if there is a derivation
S ⇒* xAy ⇒* w
with x, y in (V ∪ T)* and w in T*.
In words, a variable is useful if and only if it occurs in at least one derivation of a terminal
string. A variable that is not useful is called useless. A production is useless if it involves any useless variable.
S→A
A → aA | λ
B → bA
B is useless and so is the production B → bA! Observe that, even though a terminal string can be
derived from B, there is no way to get to B from S, i.e. we cannot achieve
S ⇒* xBy.
• Example:
P consists of
S → aS | A | C
A→ a
B → aa
C → aCb
First, note that the variable C cannot lead to any terminal string, we can then remove C and its
associated productions, we get G1 with V1 = {S, A, B}, T1 = {a} and P1 consisting of
S → aS | A
A→a
B → aa
Next, we identify variables that cannot be reached from the start variable. We can create a
dependency graph for V1. For a context-free grammar, a dependency graph has its vertices
labeled with variables with an edge between any two vertices I and J if there is a production of
the form
I → xJy
Consequently, the variable B is shown to be useless and can be removed together with its
associated production.
The resulting grammar G’ = (V’, T’, S, P’) is with V’ = {S, A}, T’ = {a} and P’ consisting of
S -> aS | A
A -> a
Eliminations of null-Production
• Definition:
a) Any production of a context-free grammar of the form
A → ε
is called an ε-production (null production).
b) Any variable A for which the derivation A ⇒* ε is possible is called nullable.
• Example:
Consider the grammar G with productions
S → aS1b
S1 → aS1b | ε
L(G) = {a^n b^n | n ≥ 1}, which is an ε-free language. The ε-production can be removed after
adding new productions obtained by substituting ε for S1 on the right-hand side.
We get an equivalent G’ with productions
S → aS1b | ab
S1 → aS1b | ab
• Theorem:
Let G be any context-free grammar with ε ∉ L(G). There exists an equivalent grammar G’ without
null productions.
Proof:
Find the set VN of all nullable variables of G:
1. For all productions A → ε, put A in VN.
2. Repeat the following step until no further variables are added to VN:
For all productions
B → A1A2…An
where A1, A2, …, An are in VN, put B in VN.
With the resulting VN, P’ can be constructed by looking at all productions in P of the form
A → x1x2…xm, m ≥ 1,
where each xi ∈ V ∪ T.
A pushdown automaton (PDA) has three components:
• an input tape,
• a finite control unit, and
• a stack.
A PDA may or may not read an input symbol, but it has to read the top of the stack in every
transition. A PDA can be described as a 7-tuple (Q, ∑, S, δ, q0, I, F) where:
• Q is a finite set of states
• ∑ is the input alphabet
• S is the set of stack symbols
• δ is the transition function
• q0 is the initial state, I is the initial stack-top symbol, and F is the set of accepting states
The following diagram shows a transition in a PDA from a state q1 to state q2, labeled as a,b →
c−
This means at state q1, if we encounter an input symbol ‘a’ and the top symbol of the stack is ‘b’,
then we pop ‘b’, push ‘c’ on top of the stack and move to state q2.
In final state acceptability, a PDA accepts a string when, after reading the entire string, the PDA
is in a final state. From the starting state, we can make moves that end up in a final state with
any stack values. The stack values are irrelevant as long as we end up in a final state.
For a PDA (Q, ∑, S, δ, q0, I, F), the language accepted by the set of final states F is −
L(PDA) = {w | (q0, w, I) ⊢* (q, ε, x), q ∈ F, for any stack string x}
Here a PDA accepts a string when, after reading the entire string, the PDA has emptied its stack.
For a PDA (Q, ∑, S, δ, q0, I, F), the language accepted by the empty stack is −
N(PDA) = {w | (q0, w, I) ⊢* (q, ε, ε), for any q ∈ Q}
An ID (instantaneous description) is an informal notation of how a PDA computes an input string and
makes a decision whether the string is accepted or rejected.
PROBLEM: Construct a PDA for the language L = {a^n b^2n | n ≥ 1}.
Solution: In this language, n number of a's should be followed by 2n number of b's. Hence, we
will apply a very simple logic: if we read a single 'a', we will push two a's onto the
stack. As soon as we read 'b', then for every single 'b' only one 'a' should get popped from the
stack. The push moves are:
δ(q0, a, Z) = (q0, aaZ)
δ(q0, a, a) = (q0, aaa)
Now when we read b, we will change the state from q0 to q1 and start popping the corresponding
'a'. Hence,
δ(q0, b, a) = (q1, ε)
Thus this process of popping one 'a' for each 'b' is repeated until all the symbols are read. Note that
the popping action occurs in state q1 only.
δ(q1, b, a) = (q1, ε)
After reading all b's, all the corresponding a's should get popped. Hence when we read ε as
input symbol then there should be nothing in the stack. Hence the move will be:
δ(q1, ε, Z) = (q2, ε)
where
PDA = ({q0, q1, q2}, {a, b}, {a, Z}, δ, q0, Z, {q2}) and δ consists of the moves
δ(q0, a, Z) = (q0, aaZ)
δ(q0, a, a) = (q0, aaa)
δ(q0, b, a) = (q1, ε)
δ(q1, b, a) = (q1, ε)
δ(q1, ε, Z) = (q2, ε)
Now we will simulate this PDA for the input string "aaabbbbbb".
⊢ δ(q1, b, aZ)
⊢ δ(q1, ε, Z)
⊢ δ(q2, ε)
ACCEPT
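For illustration (not part of the original notes), the same deterministic PDA can be simulated with a Python list as the stack:

# Simulates the PDA above for L = {a^n b^2n | n >= 1}; the stack top is the end of the list.
def accepts(w):
    state, stack = 'q0', ['Z']
    for ch in w:
        top = stack.pop()
        if state == 'q0' and ch == 'a':
            stack.extend([top, 'a', 'a'])        # push two a's for each a read
        elif state in ('q0', 'q1') and ch == 'b' and top == 'a':
            state = 'q1'                          # pop one a for each b read
        else:
            return False
    # ε-move to the final state when only Z remains on the stack
    return state == 'q1' and stack == ['Z']

print(accepts("aaabbbbbb"))  # True  (3 a's followed by 6 b's)
print(accepts("aabbb"))      # False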
PROBLEM: Design a PDA that accepts the language L = {0^n 1^m 0^n | m, n ≥ 1}.
Solution: In this PDA, n number of 0's are followed by any number of 1's followed by n number of
0's. Hence the logic for the design of such a PDA will be as follows:
Push all the leading 0's onto the stack on encountering them. Then if we read 1, just do nothing. Then
read the remaining 0's, and on each such 0, pop one 0 from the stack. The moves are:
δ(q0, 0, Z) = (q0, 0Z)
δ(q0, 0, 0) = (q0, 00)
δ(q0, 1, 0) = (q1, 0)
δ(q1, 1, 0) = (q1, 0)
δ(q1, 0, 0) = (q1, ε)
δ(q1, ε, Z) = (q2, Z)
Now we will simulate this PDA for the input string "0011100".
⊢ δ(q1, 0, 0Z)
⊢ δ(q1, ε, Z)
⊢ δ(q2, Z)
ACCEPT
1. Acceptance by Final State: The PDA is said to accept its input by the final state if it enters
any final state in zero or more moves after reading the entire input.
Let P = (Q, ∑, Γ, δ, q0, Z, F) be a PDA. The language acceptable by the final state can be
defined as:
L(P) = {w | (q0, w, Z) ⊢* (q, ε, α), q ∈ F, α ∈ Γ*}
2. Acceptance by Empty Stack: On reading the input string from the initial configuration for
some PDA, the stack of PDA gets empty.
Let P = (Q, ∑, Γ, δ, q0, Z, F) be a PDA. The language acceptable by empty stack can be defined
as:
N(P) = {w | (q0, w, Z) ⊢* (q, ε, ε), q ∈ Q}
If L = N(P1) for some PDA P1, then there is a PDA P2 such that L = L(P2). That means the
language accepted by empty stack PDA will also be accepted by final state PDA.
If there is a language L = L (P1) for some PDA P1 then there is a PDA P2 such that L = N(P2).
That means language accepted by final state PDA is also acceptable by empty stack PDA.
PROBLEM: Construct a PDA that accepts the language L over {0, 1} by empty stack which
accepts all the strings of 0's and 1's in which the number of 0's is twice the number of 1's.
We first design the part where 1 comes before the 0's. The logic is: read a single 1 and
push two 1's onto the stack. Thereafter, on reading two 0's, pop two 1's from the stack. The δ
can be
δ(q0, 0, 1) = (q0, ε)
Now, consider the second part i.e. if 0 comes before 1's. The logic is that read first 0, push it
onto the stack and change state from q0 to q1. [Note that state q1 indicates that first 0 is read
and still second 0 has yet to read].
Being in q1, if 1 is encountered then POP 0. Being in q1, if 0 is read then simply read that
second 0 and move ahead. The δ will be:
δ(q1, 0, 0) = (q1, 0)
δ(q1, 0, Z) = (q0, ε) (indicate that one 0 and one 1 is already read, so simply
read the second 0)
δ(q1, 1, 0) = (q1, ε)
δ(q0, 0, 1) = (q1, ε)
δ(q1, 0, 0) = (q1, 0)
δ(q1, 0, Z) = (q0, ε)
• Union
• Concatenation
Union
Let L1 and L2 be two context free languages. Then L1 ∪ L2 is also context free.
Example
Concatenation
If L1 and L2 are context free languages, then L1L2 is also context free.
Example
Kleene Star
Example
• Complement − If L1 is a context free language, then L1’ may not be context free.
In this construction the first symbol on the R.H.S. of every production must be a terminal symbol
(i.e. the grammar is in GNF). The following steps are used to obtain a PDA from a CFG:
Step 1: Convert the productions of the CFG into GNF.
Step 2: The PDA will have only one state, {q}.
Step 3: The start symbol of the CFG will be the initial stack symbol of the PDA.
Step 4: For each non-terminal A and each production A → α, add the move
δ(q, ε, A) = (q, α)
Step 5: For each terminal symbol a, add the move
δ(q, a, a) = (q, ε)
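A minimal Python sketch of this construction (my own encoding, not from the notes) builds the transition list of the single-state PDA directly from the grammar:

# Builds δ for the one-state PDA: expand non-terminals on the stack, match terminals with the input.
def cfg_to_pda(productions, terminals):
    """productions: dict mapping a non-terminal to a list of right-hand sides (strings)."""
    delta = []
    for A, rhss in productions.items():
        for alpha in rhss:
            delta.append(('q', 'ε', A, 'q', alpha))   # δ(q, ε, A) = (q, α)
    for a in terminals:
        delta.append(('q', a, a, 'q', 'ε'))           # δ(q, a, a) = (q, ε)
    return delta

# Example: the grammar S -> 0S1 | ε from Unit III.
for rule in cfg_to_pda({'S': ['0S1', 'ε']}, terminals='01'):
    print(rule)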
PROBLEM-1: Convert the following grammar to a PDA that accepts the same language.
S → 0S1 | A
A → 1A0 | S | ε
S → 0S1 | 1S0 | ε
S → 0SX | 1SY | ε
X→1
Y→0
PROBLEM-2: Construct a PDA for the given CFG, and test whether 010^4 (i.e. the string 010000) is
acceptable by this PDA.
S → 0BB
B → 0S | 1S | 0
⊢ δ(q, 10000,1SB) R3
⊢ δ(q, 0, B) R3
⊢ δ(q, 0, 0) R2
⊢ δ(q, ε) R3
ACCEPT
S → aSb
S→a|b|ε
⊢ δ(q, b, b) R4
⊢ δ(q, ε, z0) R5
⊢ δ(q, ε)
ACCEPT
Example: Consider the PDA PN = ({q}, {0, 1}, {Z, A, B}, δN, q, Z) (its transition diagram is omitted
here). The corresponding context-free grammar G = (V, {0, 1}, R, S) produced by the standard PDA-to-CFG
construction begins with the rule
1. S → [qZq]
and contains one further rule for each transition of δN.
Unit IV
NORMAL FORMS
The definition of context free grammars (CFGs) allows us to develop a wide variety of
grammars. Most of the time, some of the productions of CFGs are not useful and are redundant.
This happens because the definition of CFGs does not restrict us from making these redundant
productions.
By simplifying CFGs we remove all these redundant productions from a grammar , while
keeping the transformed grammar equivalent to the original grammar. Two grammars are called
equivalent if they produce the same language. Simplifying CFGs is necessary to later convert
them into Normal forms.
Types of redundant productions and the procedure of removing them are mentioned below.
4.2.1 Useless productions – The productions that can never take part in the derivation of any string
are called useless productions. Similarly, a variable that can never take part in the derivation of any
string is called a useless variable. For example:
S -> abS | abA | abB
A -> cd
B -> aB
C -> dc
In the example above, the production ‘C -> dc’ is useless because the variable ‘C’ will never occur
in the derivation of any string. The other productions are written in such a way that variable ‘C’ can
never be reached from the starting variable ‘S’.
The production ‘B -> aB’ is also useless because there is no way for it ever to terminate. If it never
terminates, then it can never produce a string. Hence the production can never take part in any
derivation.
To remove useless productions, we first find all the variables which will never lead to a terminal
string, such as variable ‘B’. We then remove all the productions in which variable ‘B’ occurs.
So the modified grammar becomes –
S -> abS | abA
A -> cd
C -> dc
We then try to identify all the variables that can never be reached from the starting variable, such
as variable ‘C’. We then remove all the productions in which variable ‘C’ occurs.
The grammar below is now free of useless productions –
S -> abS | abA
A -> cd
4.2.2. є or λ productions – The productions of type ‘A -> λ’ are called λ productions ( also
called lambda productions and null productions) . These productions can only be removed from
those grammars that do not generate λ (an empty string). It is possible for a grammar to contain
null productions and yet not produce an empty string.
To remove null productions , we first have to find all the nullable variables. A variable ‘A’ is
called nullable if λ can be derived from ‘A’. For all the productions of type ‘A -> λ’ , ‘A’ is a
nullable variable. For all the productions of type ‘B -> A1A2…An ‘ , where all ’Ai’s are nullable
variables , ‘B’ is also a nullable variable.
After finding all the nullable variables, we can now start to construct the null production free
grammar. For all the productions in the original grammar , we add the original production as well
as all the combinations of the production that can be formed by replacing the nullable variables
in the production by λ. If all the variables on the RHS of the production are nullable , then we do
not add ‘A -> λ’ to the new grammar. An example will make the point clear. Consider the
grammar –
S -> ABCd
A -> BC
B -> bB | λ
C -> cC | λ
Let's first find all the nullable variables. Variables ‘B’ and ‘C’ are clearly nullable because they
contain ‘λ’ on the RHS of their productions. Variable ‘A’ is also nullable because in (2), both
variables on the RHS are nullable. Variable ‘S’, however, is not nullable, because its production (1)
contains the terminal ‘d’. So variables ‘A’, ‘B’ and ‘C’ are the nullable variables.
Let's create the new grammar. We start with the first production. Add the first production as it is.
Then we create all the possible combinations that can be formed by replacing the nullable
variables with λ. Therefore line (1) now becomes ‘S -> ABCd | ABd | ACd | BCd | Ad | Bd | Cd |
d’. We apply the same rule to line (2) but we do not add ‘A -> λ’ even though it is a possible
combination. We remove all the productions of type ‘V -> λ’. The new grammar now becomes –
S -> ABCd | ABd | ACd | BCd | Ad | Bd | Cd | d
A -> BC | B | C
B -> bB | b
C -> cC | c
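The fixed-point computation of nullable variables described above can be sketched in Python as follows (an illustration, not from the notes; a production is encoded as a (head, body) pair with the body a tuple of symbols):

# Repeatedly add a variable to the nullable set when every symbol of some body is already nullable.
def nullable_variables(productions):
    nullable = {A for A, body in productions if body == ()}          # productions A -> λ
    changed = True
    while changed:
        changed = False
        for A, body in productions:
            if A not in nullable and all(X in nullable for X in body):
                nullable.add(A)
                changed = True
    return nullable

grammar = [('S', ('A', 'B', 'C', 'd')), ('A', ('B', 'C')),
           ('B', ('b', 'B')), ('B', ()), ('C', ('c', 'C')), ('C', ())]
print(nullable_variables(grammar))   # {'A', 'B', 'C'} — S is not nullable because of the terminal d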
4.2.3 Unit productions – The productions of type ‘A -> B’ are called unit productions.
To create a unit production free grammar ‘Guf’ from the original grammar ‘G’ , we follow the
procedure mentioned below.
First add all the non-unit productions of ‘G’ in ‘Guf’. Then for each variable ‘A’ in grammar ‘G’
, find all the variables ‘B’ such that ‘A *=> B’. Now , for all variables like ‘A ’ and ‘B’, add ‘A -
> x1 | x2 | …xn’ to ‘Guf’ where ‘B -> x1 | x2 | …xn ‘ is in ‘Guf’ . None of the x1 , x2 … xn are
single variables because we only added non-unit productions in ‘Guf’. Hence the resultant
grammar is unit production free. For eg.
S -> Aa | B
A -> b | B
B -> A | a
Let's add all the non-unit productions of ‘G’ in ‘Guf’. ‘Guf’ now becomes –
S -> Aa
A -> b
B -> a
Now we find all the variables that satisfy ‘X *=> Z’. These are ‘S *=> A’ , ‘S*=>B’, ‘A *=> B’
and ‘B *=> A’. For ‘A *=> B’ , we add ‘A -> a’ because ‘B ->a’ exists in ‘Guf’. ‘Guf’ now
becomes
S -> Aa
A -> b | a
B -> a
For ‘B *=> A’ , we add ‘B -> b’ because ‘A -> b’ exists in ‘Guf’. The new grammar now
becomes
S -> Aa
A -> b | a
B -> a | b
We follow the same step for ‘S *=> A’ and ‘S*=>B’ and finally get the following grammar –
S -> Aa | b | a
A -> b | a
B -> a | b
Note: To remove all kinds of productions mentioned above, first remove the null productions,
then the unit productions and finally , remove the useless productions. Following this order is
very important to get the correct result.
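The unit-production removal procedure above can be sketched in Python as follows (a compact illustration, not from the notes):

# Compute the unit pairs A =>* B, then give A all of B's non-unit right-hand sides.
def remove_unit_productions(prods, variables):
    """prods: dict non-terminal -> set of right-hand-side strings."""
    non_unit = {A: {r for r in rhs if not (len(r) == 1 and r in variables)}
                for A, rhs in prods.items()}
    unit_pairs = {A: {A} for A in prods}          # unit_pairs[A] = all B reachable by unit productions
    changed = True
    while changed:
        changed = False
        for A in prods:
            for B in list(unit_pairs[A]):
                for r in prods[B]:
                    if len(r) == 1 and r in variables and r not in unit_pairs[A]:
                        unit_pairs[A].add(r)
                        changed = True
    return {A: set().union(*(non_unit[B] for B in unit_pairs[A])) for A in prods}

g = {'S': {'Aa', 'B'}, 'A': {'b', 'B'}, 'B': {'A', 'a'}}
print(remove_unit_productions(g, variables={'S', 'A', 'B'}))
# S: {'Aa', 'a', 'b'}, A: {'a', 'b'}, B: {'a', 'b'}  (set ordering may vary)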
Context-free languages are accepted by pushdown automata but not by finite automata. Context-free
languages can be generated by context-free grammars, whose productions have the form A -> α, where A
is a single non-terminal and α ∈ (V ∪ T)*.
Concatenation : If L1 and If L2 are two context free languages, their concatenation L1.L2 will
also be context free. For example,
L1 = {a^n b^n | n >= 0} and L2 = {c^m d^m | m >= 0}
L3 = L1.L2 = {a^n b^n c^m d^m | m >= 0 and n >= 0} is also context free.
L1 says number of a’s should be equal to number of b’s and L2 says number of c’s should be
equal to number of d’s. Their concatenation says first number of a’s should be equal to number
of b’s, then number of c’s should be equal to number of d’s. So, we can create a PDA which will
first push for a’s, pop for b’s, push for c’s then pop for d’s. So it can be accepted by pushdown
automata, hence context free.
Kleene Closure : If L1 is context free, its Kleene closure L1* will also be context free. For
example,
L1 = {a^n b^n | n >= 0}
L1* = {a^n b^n | n >= 0}* is also context free.
Intersection and complementation : If L1 and If L2 are two context free languages, their
intersection L1 ∩ L2 need not be context free. For example,
L1 = {a^n b^n c^m | n >= 0 and m >= 0} and L2 = {a^m b^n c^n | n >= 0 and m >= 0}
L3 = L1 ∩ L2 = {a^n b^n c^n | n >= 0} need not be context free.
L1 says number of a’s should be equal to number of b’s and L2 says number of b’s should be
equal to number of c’s. Their intersection says both conditions need to be true, but push down
automata can compare only two. So it cannot be accepted by pushdown automata, hence not
context free.
Similarly, complementation of context free language L1 which is ∑* – L1, need not be context
free.
A context-free grammar (CFG) is in Chomsky Normal Form (CNF) if all production rules satisfy
one of the following conditions:
• a non-terminal generating a terminal (e.g. X → x),
• a non-terminal generating two non-terminals (e.g. X → YZ), or
• the start symbol generating ε (e.g. S → ε).
The grammar G1 is in CNF as production rules satisfy the rules specified for CNF. However, the
grammar G2 is not in CNF as the production rule S->aZ contains terminal followed by non-
terminal which does not satisfy the rules specified for CNF.
Note –
CNF is used as a preprocessing step for many algorithms for CFG like CYK(membership algo),
bottom-up parsers etc.
Generating a string w of length ‘n’ requires ‘2n−1’ production steps in CNF.
Any context-free grammar that does not have ε in its language has an equivalent grammar in CNF.
Step 3. Eliminate terminals from the RHS if they occur there together with other terminals or
non-terminals, e.g. the production rule X → xY can be decomposed as:
X → ZY
Z → x
Step 4. Eliminate RHSs with more than two non-terminals, e.g. the production rule X → XYZ can be
decomposed as:
X → PZ
P → XY
Example – Let us take an example to convert CFG to CNF. Consider the given grammar G1:
S → ASB
A → aAS|a|ε
B → SbS|A|bb
Step 1. As start symbol S appears on the RHS, we will create a new production rule S0->S.
Therefore, the grammar will become:
S0->S
S → ASB
A → aAS|a|ε
B → SbS|A|bb
Step 2. As grammar contains null production A-> ε, its removal from the grammar yields:
S0->S
S → ASB|SB
A → aAS|aS|a
B → SbS| A|ε|bb
Now, it creates null production B→ ε, its removal from the grammar yields:
S0->S
S → AS|ASB| SB| S
A → aAS|aS|a
B → SbS| A|bb
Now, it creates unit production B->A, its removal from the grammar yields:
S0->S
S → AS|ASB| SB| S
A → aAS|aS|a
B → SbS|bb|aAS|aS|a
Also, removal of unit production S->S and S0->S from grammar yields:
S0-> AS|ASB| SB
S → AS|ASB| SB
A → aAS|aS|a
B → SbS|bb|aAS|aS|a
Step 3. In the production rules A → aAS | aS and B → SbS | aAS | aS, the terminals a and b exist on
the RHS together with non-terminals. Removing them from the RHS:
S0-> AS|ASB| SB
S → AS|ASB| SB
A → XAS|XS|a
B → SYS|bb|XAS|XS|a
X →a
Y→b
Similarly, in the production rule B → bb both terminals must be replaced; introducing V → b gives:
S0-> AS|ASB| SB
S → AS|ASB| SB
A → XAS|XS|a
B → SYS|VV|XAS|XS|a
X→a
Y→b
V→b
Step 4: In production rule S0->ASB, RHS has more than two symbols, removing it from
grammar yields:
S0-> AS|PB| SB
S → AS|ASB| SB
A → XAS|XS|a
B → SYS|VV|XAS|XS|a
X→a
Y→b
V→b
P → AS
Similarly, S->ASB has more than two symbols, removing it from grammar yields:
S0-> AS|PB| SB
S → AS|QB| SB
A → XAS|XS|a
B → SYS|VV|XAS|XS|a
X→a
Y→b
V→b
P → AS
Q → AS
Similarly, A->XAS has more than two symbols, removing it from grammar yields:
S0-> AS|PB| SB
S → AS|QB| SB
A → RS|XS|a
B → SYS|VV|XAS|XS|a
X→a
Y→b
V→b
P → AS
Q → AS
R → XA
Similarly, B->SYS has more than two symbols, removing it from grammar yields:
S0 -> AS|PB| SB
S → AS|QB| SB
A → RS|XS|a
B → TS|VV|XAS|XS|a
X→a
Y→b
V→b
P → AS
Q → AS
R → XA
T → SY
Similarly, B->XAS has more than two symbols; removing it from the grammar (introducing U → XA)
yields the final grammar in CNF:
S0-> AS|PB| SB
S → AS|QB| SB
A → RS|XS|a
B → TS|VV|US|XS|a
X→a
Y→b
V→b
P → AS
Q → AS
R → XA
T → SY
U → XA
A context-free grammar (CFG) is in Greibach Normal Form (GNF) if all production rules satisfy
one of the following conditions:
• the start symbol generating ε (e.g. S → ε),
• a non-terminal generating a terminal (e.g. A → a), or
• a non-terminal generating a terminal followed by any number of non-terminals (e.g. A → aA1A2…An).
The grammar G1 is in GNF as its production rules satisfy the rules specified for GNF. However, the
grammar G2 is not in GNF as the production rules B → ε and A → ε do not satisfy the rules
specified for GNF (only the start symbol can generate ε).
Note –
If the given grammar is not in CNF, convert it to CNF. You can refer following article to convert
CFG to CNF: Converting Context Free Grammar to Chomsky Normal Form
If CFG contains left recursion, eliminate them. You can refer following article to eliminate left
recursion: Parsing | Set 1 (Introduction, Ambiguity and Parsers)
If any production rule is not in the GNF form, convert them. Let us take an example to convert
CFG to GNF. Consider the given grammar G1:
S → XA|BB
B → b|SB
X→b
A→a
As G1 is already in CNF and there is no left recursion, we can skip steps 1 and 2 and directly
move to step 3.
The production rule B->SB is not in GNF, therefore, we substitute S -> XA|BB in production
rule B->SB as:
S → XA|BB
B → b|XAB|BBB
X→b
A→a
The production rules S->XA and B->XAB is not in GNF, therefore, we substitute X->b in
production rules S->XA and B->XAB as:
S → bA|BB
B → b|bAB|BBB
X→b
A→a
The production rule B → b | bAB | BBB is left recursive. Removing the left recursion by introducing
a new non-terminal C yields:
S → bA|BB
B → bC|bABC
C → BBC| ε
X→b
A→a
Removing the null production C → ε from the grammar yields:
S → bA|BB
B → bC|bABC|b|bAB
C → BBC|BB
X→b
A→a
The production rule S → BB is not in GNF; substituting B → bC | bABC | b | bAB for the leading B
yields:
S → bA| bCB|bABCB|bB|bABB
B → bC|bABC|b|bAB
C → BBC|BB
X→b
A→a
Similarly, substituting B for the leading B in the production rule C → BB yields:
S → bA| bCB|bABCB|bB|bABB
B → bC|bABC|b|bAB
C → BBC
C → bCB|bABCB|bB|bABB
X→b
A→a
Finally, substituting B for the leading B in the production rule C → BBC yields the grammar in GNF:
S → bA| bCB|bABCB|bB|bABB
B → bC|bABC|b|bAB
C → bCBC|bABCBC|bBC|bABBC
C → bCB|bABCB|bB|bABB
X→b
A→a
TURING MACHINE
The Turing machine was invented in 1936 by Alan Turing. It is an accepting device which accepts the
recursively enumerable languages generated by type-0 grammars.
The mapping function shows the mapping from a state of the finite control and the input symbol on the
tape to the next state, the symbol to write, and the direction for moving the tape head. This is
known as a triplet, or a program for the Turing machine.
(q0, a) → (q1, A, R)
That means in state q0, if we read the symbol 'a', then the machine will go to state q1, replace 'a'
by 'A' and move the head one cell to the right (R stands for right).
The Turing machine can be modelled with the help of the following representation.
1. The input tape is having an infinite number of cells, each cell containing one input symbol and
thus the input string can be placed on tape. The empty tape is filled by blank characters.
2. The finite control and the tape head which is responsible for reading the current input symbol.
The tape head can move to the left or to the right.
4. Finite set of symbols called external symbols which are used in building the logic of turing
machine.
The Turing machine accepts exactly the languages that are recursively enumerable.
Recursive means repeating the same set of rules any number of times, and enumerable means
a list of elements. The TM can also compute functions, such as addition,
multiplication, subtraction, division, the power function, and many more.
EXAMPLE:Construct a turing machine which accepts the language of aba over ∑ = {a, b}.
Solution: We will assume that on input tape the string 'aba' is placed like this:
The tape head will read the sequence up to the Δ character. If the tape head reads out the string
'aba', then the TM will halt after reading Δ.
Now, we will see how this turing machine will work for aba. Initially, state is q0 and head points
to a as:
The move will be δ(q0, a) = δ(q1, A, R) which means it will go to state q1, replaced a by A and
head will move to right as:
The move will be δ(q1, b) = δ(q2, B, R) which means it will go to state q2, replaced b by B and
head will move to right as:
The move will be δ(q2, a) = δ(q3, A, R) which means it will go to state q3, replaced a by A and
head will move to right as:
The move δ(q3, Δ) = (q4, Δ, S) which means it will go to state q4 which is the HALT state and
HALT state is always an accept state for any TM.
States a b Δ
q0 (q1, A, R) – –
q1 – (q2, B, R) –
q2 (q3, A, R) – –
q3 – – (q4, Δ, S)
q4 – – –
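For illustration (not part of the original notes), the transition table above can be executed directly with a small Python simulator:

# Runs a Turing machine table; 'S' means "stay", and reaching q4 is the accepting HALT.
def run_tm(tape_str, delta, start='q0', halt='q4', blank='Δ'):
    tape = list(tape_str) + [blank]
    head, state = 0, start
    while state != halt:
        symbol = tape[head]
        if (state, symbol) not in delta:
            return False                      # no move defined: reject
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += {'R': 1, 'L': -1, 'S': 0}[move]
    return True

delta = {('q0', 'a'): ('q1', 'A', 'R'),
         ('q1', 'b'): ('q2', 'B', 'R'),
         ('q2', 'a'): ('q3', 'A', 'R'),
         ('q3', 'Δ'): ('q4', 'Δ', 'S')}

print(run_tm("aba", delta))   # True
print(run_tm("abab", delta))  # False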
PROBLEM-1: Construct a Turing machine for the language L = {0^n 1^n 2^n | n ≥ 1}.
Now, we will see how this Turing machine will work for 001122. Initially, the state is q0 and the head
points to 0 as:
The move will be δ(q0, 0) = δ(q1, A, R) which means it will go to state q1, replaced 0 by A and
head will move to the right as:
The move will be δ(q1, 0) = δ(q1, 0, R) which means it will not change any symbol, remain in
the same state and move to the right as:
The move will be δ(q1, 1) = δ(q2, B, R) which means it will go to state q2, replaced 1 by B and
head will move to right as:
The move will be δ(q2, 1) = δ(q2, 1, R) which means it will not change any symbol, remain in
the same state and move to right as:
The move will be δ(q2, 2) = δ(q3, C, R) which means it will go to state q3, replaced 2 by C and
head will move to right as:
Now move δ(q3, 2) = δ(q3, 2, L) and δ(q3, C) = δ(q3, C, L) and δ(q3, 1) = δ(q3, 1, L) and δ(q3,
B) = δ(q3, B, L) and δ(q3, 0) = δ(q3, 0, L), and then move δ(q3, A) = δ(q0, A, R), it means will
go to state q0, replaced A by A and head will move to right as:
The move will be δ(q0, 0) = δ(q1, A, R) which means it will go to state q1, replaced 0 by A, and
head will move to right as:
The move will be δ(q1, B) = δ(q1, B, R) which means it will not change any symbol, remain in
the same state and move to right as:
The move will be δ(q1, 1) = δ(q2, B, R) which means it will go to state q2, replaced 1 by B and
head will move to right as:
The move will be δ(q2, C) = δ(q2, C, R) which means it will not change any symbol, remain in
the same state and move to right as:
The move will be δ(q2, 2) = δ(q3, C, L) which means it will go to state q3, replaced 2 by C and
head will move to left until we reached A as:
When the symbol immediately before B is A, that means all the 0's are marked by A. So we will move
right to ensure that no 1 or 2 is present. The move will be δ(q2, B) = (q4, B, R) which means it will
go to state q4, will not change any symbol, and move to the right as:
The move will be (q4, B) = δ(q4, B, R) and (q4, C) = δ(q4, C, R) which means it will not change
any symbol, remain in the same state and move to right as:
The move δ(q4, X) = (q5, X, R) which means it will go to state q5 which is the HALT state and
HALT state is always an accept state for any TM.
PROBLEM-2: Construct a TM machine for checking the palindrome of the string of even
length.
Solution:
Firstly we read the first symbol from the left and then we compare it with the first symbol from
right to check whether it is the same.
Again we compare the second symbol from left with the second symbol from right. We repeat
this process for all the symbols. If we find any symbol not matching, we cannot lead the
machine to the HALT state.
Suppose the string is ababbabaΔ. The simulation for ababbabaΔ can be shown as follows:
Now, we will see how this Turing machine will work for ababbabaΔ. Initially, state is q0 and
head points to a as:
Go to HALT state
PROBLEM-3: Construct a TM machine for checking the palindrome of the string of odd
length.
Solution:
Firstly we read the first symbol from left and then we compare it with the first symbol from right
to check whether it is the same.
Again we compare the second symbol from the left with the second symbol from the right. We repeat
this process for all the symbols. If we find any symbol not matching, we cannot lead the machine to
the HALT state.
Suppose the string is 00100Δ. The simulation for 00100Δ can be shown as follows:
Now, we will see how this Turing machine will work for 00100Δ. Initially, state is q0 and head
points to 0 as:
Moved right up to Δ
Move left
PROBLEM-4: Construct TM for the addition function for the unary number system.
Solution:
The unary number system is made up of only one character, i.e. 1. The number 5 can be written in the
unary number system as 11111. In this TM, we are going to perform the addition of two unary
numbers.
For example
2+3
i.e. 11 + 111 = 11111
If you observe this process of addition, you will find the resemblance with string concatenation
function.
In this case, we simply replace '+' by 1, move ahead to the right searching for the end of the string,
and convert the last 1 to Δ.
Input: 3+2
Convert 1 to Δ
Thus the tape now consists of the addition of two unary numbers.
Here, we are implementing the function f(a + b) = c. We assume a and b are both non-zero.
PROBLEM: Construct a TM for the subtraction function for the unary number system.
Solution: Here we have a certain assumption: the first number is greater than the second one.
Let us assume that a = 3, b = 2, so the input tape will be:
We will move right to the '−' symbol and perform the reduction of the number of 1's from the first number.
Let us look at the simulation for understanding the logic:
The universal Turing machine is a multi-tape TM. The universal language Lu is the set of binary
strings that encode a pair (M, w), where M is a TM that accepts this language and w is a binary string
in (0+1)* such that w belongs to L(M).
(Figure: the finite control of the UTM with an input tape holding the encodings of M and w, e.g.
0000010001001, and a tape recording the current state.)
A universal Turing machine is a multi-tape TM which contains a finite control with 3 tapes. The first
tape holds the transitions of the Turing machine M, the second tape holds the input string w encoded
in binary form, and the third tape records the state of M.
UNIT-V
UNDECIDABILITY
A problem is said to be decidable if there exists some algorithm which solves the
problem. If no such algorithm exists then the problem is said to be undecidable. Generally if the
problem is recognized by a Turing machine then it is decidable problem otherwise they are
called undecidable problems.
In this section we will discuss all the undecidable problems regarding Turing machine.
The reduction is used to prove whether given language is decidable or not. Let us understand the
concept of reduction first.
5.2.1 Reduction:
A reduction from a problem p1 to a problem p2 is an algorithm that converts any instance of p1 into an
instance of p2 with the same answer. If p1 reduces to p2, then:
i) if p1 is undecidable, then p2 is undecidable;
ii) if p1 is non-RE, then p2 is non-RE.
Proof:
i) Consider an instance w of p1. Then construct an algorithm that takes w as input and converts it
into another instance x of p2. Then apply the algorithm that checks whether x is in p2; if that
algorithm answers "yes", then x is in p2, and since x was obtained by reducing p1 to p2, we can also
say that w is in p1. Similarly, if the algorithm answers "no", then x is not in p2, which also means
that w is not in p1. This proves that if p1 is undecidable then p2 is also undecidable.
ii) We assume that p1 is non-RE but p2 is RE. Now construct an algorithm to reduce p1 to p2; since p2
is recognized, there is a Turing machine that says "yes" if the input is in p2 but may or may not halt
for an input which is not in p2. As we know, one can convert an instance w of p1 to an instance x of
p2. Then apply a TM to check whether x is in p2. If x is accepted, that also means w is accepted. This
procedure describes a TM whose language is p1: if w is in p1 then x is also in p2, and if w is not in
p1 then x is also not in p2. This contradicts the assumption that p1 is non-RE, which proves that if
p1 is non-RE then p2 is also non-RE.
i)Halt: The machine starting at this configuration will halt after a finite number of states.
ii) No Halt: The machine starting at this configuration never reaches a halt state, no
matter how long it runs.
Now the question arises based on these two observations: given any functional matrix, input
data tape and initial configuration, is it possible to determine whether the process will ever
halt? This is called the halting problem. That means we are asking for a procedure which enables us
to solve the halting problem for every pair (machine, tape).
The answer is "no"; that is, the halting problem is unsolvable. Now we will prove why it is
unsolvable. Suppose there exists a TM, M1, which decides whether or not any computation by a TM
T will ever halt when a description dT of T and a tape t of T are given [that means the input to
machine M1 will be a (machine, tape) pair]. Then for every input (t, dT) to M1, if T halts for input
t, M1 also halts, which is called the accept halt. Similarly, if T does not halt for input t, then M1
will still halt, which is called the reject halt, as shown in the figure.
(Figure: M1 takes the input (t, dT) and always halts, ending either in the accept halt or in the
reject halt.)
In this concept we will discuss about the undecidability of strings and not of Turing
machines. The undecidability of strings is determined with the help of post’s correspondence
problem (PCP).
"Post's correspondence problem consists of two lists of strings of equal length over the input
alphabet ∑. The two lists are A = w1, w2, w3, …, wn and B = x1, x2, x3, …, xn. The instance has a
solution if there exists a non-empty sequence of integers i1, i2, …, im such that
wi1 wi2 … wim = xi1 xi2 … xim."
A= (1, 0, 010, 11) and B = (10, 10, 01, 1). The input set is ∑ = {0, 1}. Find the solution.
Solution − Choose the sequence of indices 1, 2, 1, 3, 3, 4 (reading left to right):
A-string: w1 w2 w1 w3 w3 w4 = 1 · 0 · 1 · 010 · 010 · 11 = 10101001011
B-string: x1 x2 x1 x3 x3 x4 = 10 · 10 · 10 · 01 · 01 · 1 = 10101001011
Both concatenations are equal, so the sequence 1, 2, 1, 3, 3, 4 is a solution of this PCP instance.
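A proposed PCP solution can be checked mechanically; the following small Python sketch (not from the notes) verifies the sequence found above:

# Concatenate the A-strings and B-strings along the index sequence and compare.
def is_pcp_solution(A, B, indices):
    a = ''.join(A[i - 1] for i in indices)   # 1-based indices, as in the notes
    b = ''.join(B[i - 1] for i in indices)
    return len(indices) > 0 and a == b

A = ('1', '0', '010', '11')
B = ('10', '10', '01', '1')
print(is_pcp_solution(A, B, [1, 2, 1, 3, 3, 4]))   # True: both sides spell 10101001011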
Example 5.3.2: Obtain the solution for the following correspondence system.
A = (ba, ab, a, baa, b), B = (bab, baa, ba, a, aba). The input set is {a, b}.
Solution: To obtain a solution of the correspondence system, a suitable sequence of indices has to be
chosen. Hence we get
If the Turing machine M has some polynomial p(n) and the machine M never makes more
than p(n) moves when an input of length n is given to M then such Turing machine is said to be
polynomial time TM.
The class P is the set of languages accepted by deterministic Turing machines in polynomial time.
Equivalently, the class P is the set of problems that can be solved by a real computer in polynomial
time, and there are many such problems,
for example: transitive closure, matrix multiplication, and Kruskal's algorithm for the minimum
spanning tree.
In Kruskal's algorithm a spanning tree of minimum total weight is obtained. The edges are considered in order of increasing weight, and each time the edge of minimum weight that does not form a circuit with the edges already chosen is selected from the graph; the selected edges need not be adjacent to one another.
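A minimal sketch of Kruskal's algorithm is given below; the small graph and its weights are hypothetical illustration data, not taken from the notes.

    # Kruskal's algorithm: repeatedly take the cheapest remaining edge that does not
    # close a circuit. Edges are (weight, u, v) tuples; vertices can be any labels.

    def kruskal_mst(vertices, edges):
        parent = {v: v for v in vertices}

        def find(v):                            # representative of v's component
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v

        mst = []
        for weight, u, v in sorted(edges):      # cheapest edge first
            ru, rv = find(u), find(v)
            if ru != rv:                        # skip edges that would form a circuit
                parent[ru] = rv
                mst.append((weight, u, v))
        return mst

    # Hypothetical example graph:
    edges = [(1, "a", "b"), (4, "a", "c"), (3, "b", "c"), (2, "c", "d"), (5, "b", "d")]
    print(kruskal_mst({"a", "b", "c", "d"}, edges))
    # [(1, 'a', 'b'), (2, 'c', 'd'), (3, 'b', 'c')]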
If a non-deterministic Turing machine M has some polynomial p(n) such that M never makes more than p(n) moves on any input of length n, then M is said to be a polynomial-time NTM. The class NP is the set of languages accepted by polynomial-time non-deterministic Turing machines. There appear to be many problems in NP which are not in P. Even if we cannot resolve the P = NP question, we can at least demonstrate that certain problems in NP are “hardest”, in the sense that if any one of them were in P, then P = NP. Such problems are called NP-complete problems.
Travelling salesman problem: given a set of cities and a cost to travel between each pair of cities, determine whether there is a tour that visits every city exactly once and returns to the first city with total cost at most a given bound.
This decision problem is in NP: a proposed tour (a certificate) can be checked in polynomial time, even though no polynomial-time method is known for finding one. The travelling salesman decision problem is in fact NP-complete, and its optimization version, which asks for the cheapest tour, is NP-hard.
Fig.: an example graph on the vertices a, b, c, d, e, f.
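What makes such a problem a member of NP is that a proposed solution can be verified in polynomial time. Below is a minimal sketch of such a verifier for the travelling salesman decision problem; the cost matrix, tour, and budget are hypothetical illustration data.

    # NP in a nutshell: a certificate (here, a proposed tour) can be checked in
    # polynomial time, even though finding one may be hard.

    def verify_tour(cost, tour, budget):
        """Check that `tour` visits every city exactly once, returns to the start,
        and has total cost <= budget; runs in polynomial time."""
        n = len(cost)
        if sorted(tour) != list(range(n)):      # every city exactly once
            return False
        total = sum(cost[tour[i]][tour[(i + 1) % n]] for i in range(n))
        return total <= budget

    cost = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(verify_tour(cost, [0, 1, 3, 2], 25))  # 2 + 4 + 3 + 9 = 18 <= 25 -> True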
Rice's theorem states that any non-trivial semantic property of the language recognized by a Turing machine is undecidable. A property P is identified with the language of all (encodings of) Turing machines whose language satisfies that property.
Formal Definition
A non-trivial property is satisfied by some recursively enumerable languages and not satisfied by others. Formally, for a non-trivial property P, viewed as the language L of machine encodings that satisfy it, both of the following hold:
Property 1 − Whenever two Turing machines M1 and M2 recognize the same language, either both <M1>, <M2> ∈ L or both <M1>, <M2> ∉ L.
Property 2 − There exist Turing machines M1 and M2 such that <M1> ∈ L and <M2> ∉ L.
Proof
Since P is non-trivial, at least one language satisfies it, i.e. there is a Turing machine M0 with L(M0) ∈ P; we may assume that the empty language does not satisfy P (otherwise argue with the complement of P). Given an instance <M, w> of ATM = {<M, w> | M accepts input w}, construct a machine N that works as follows.
On input x:
• Run M on w.
• If M does not accept w (or does not halt), then do not accept x (or do not halt).
• If M accepts w, then run M0 on x and accept x exactly when M0 accepts x.
Thus if M accepts w then L(N) = L(M0) ∈ P, and otherwise L(N) = ∅ ∉ P. A decider for the property P applied to <N> would therefore decide ATM, which is impossible, so P is undecidable.
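A minimal sketch of this construction is shown below, assuming a hypothetical `simulate` helper; no real decider exists, and the stubs only show the shape of the machine N.

    # Sketch of the machine N built in the proof above: on input x, N first runs M on w
    # and, only if that run accepts, behaves exactly like M0 on x. `simulate` is a
    # hypothetical universal-simulation stub (it may never return).

    def simulate(machine, tape):
        """Hypothetical universal simulation of `machine` on `tape`."""
        raise NotImplementedError

    def build_N(M, w, M0):
        def N(x):
            if simulate(M, w):           # may run forever if M never halts on w
                return simulate(M0, x)   # M accepted w: accept exactly L(M0)
            return False                 # M rejected w: accept nothing
        return N

    # If deciding the property P were possible, applying that decider to build_N(M, w, M0)
    # would tell us whether M accepts w, i.e. it would decide A_TM -- a contradiction.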
ASSIGNMENT QUESTIONS
UNIT-1
1. Conclude what type of strings will be accepted by the below Finite automata as shown
in figure
2. (a) Design a DFA M that accepts the language L(M) = {w / w ε {a,b}* and w does not
contain 3 consecutive b’s}.
(b) Design a DFA to accept strings over c and d such that the number of d’s is divisible by 4.
(c) Design DFA which accepts language L ={ 0,000,00000,........} over {0}.
3. Out of the following languages, which is/are accepted by the given FA (shown in
figure 1)? Explain.
6. Design a Mealy machine that uses its state to remember the last symbol read and emits
output ‘y’ whenever the current input matches the previous one, and emits ‘n’ otherwise.
(b) Convert the following NFA with ε-moves into an NFA without ε-moves and show that
the NFA with ε-moves accepts the same language, as shown in the figure.
8. (a) Design a Moore machine to determine the residue mod 5 for each ternary string (base
3)
treated as ternary integer.
(b) Convert the following Mealy machine into equivalent Moore machine as shown in
figure
10. (a) Construct a minimal state finite automaton to the following state diagram:
(b) Design a Moore machine and a Mealy machine that accept strings over ∑ = {0, 1}
where, if the input ends in 001, output A; if the input ends in 100, output B;
else output C.
11. Conversion of Moore to Mealy Machine
12. Conversion of Mealy to Moore Machine
UNIT-II
1. Find a Regular expression corresponding to each of the following subsets over {0,1}*.
(a) The set of all strings containing no three consecutive 0’s.
(b) The set of all strings where the 10th symbol from right end is a 1.
(c) The set of all strings over {0,1} having even number of 0’s & odd no. of 1’s.
(d) The set of all strings over {0,1} in which the number of occurrences of is
divisible by 3.
2. Construct an NFA for the following:
(a) R=01[((10)*+111)*+0]*1
(b) ((01+10)*00)*
3. Construct a DFA accepting language represented by 0*1*2*.
4. Is an FA possible for the following language L = {anbn | n >= 1}? If not, why not? Explain.
5. (a) When are two regular expressions said to be equivalent? Explain.
(b) Find the regular expression for the following finite automaton:
.
6. Construct finite automation to accept the regular expression (0+1*) (00+11) (0+1)*.
7. Construct the NFA for the regular expression,
r = 0*1( (0+1) 0*1)* (ε+(0+1)(00)*)+0(00)*.
8. Give a DFA for accepting L = {anbm / abs(n − m) mod 3 <= 1}, or else show that L is non-regular.
9. Construct an NFA and DFA for the regular expression: (0 + 1)*(00 + 11) 110.
10. Find the regular expression for the following finite automaton:
UNIT- III
Find the leftmost derivation, rightmost derivation, and parse tree for the string:
a*(a+b00).
4. (a) Construct a context free grammar for generating the balanced parentheses, like ( ),[
], [( ) ( ) ], ( [ ] ), etc. and find the moves of the grammar to derive the string: ( [ ( ) ] ( )
)
(b) Draw the parse tree for the production grammar: S → (S) | S⊃S | ~S | i | j, generating
the symbolic formula: (~~i ⊃ (i ⊃ ~~j)).
5. Explain Chomsky hierarchy.
6. Prove that the following language is not context-free:
L = {www | w ε {a, b}*}.
16. Find the PDA with only one state that accepts the language {ambn : n > m }.
17. Construct the PDA that recognizes the language L = {xxR : x є {a,b}+}.
18. Define a PDA. Design a PDA for L = {xcxr / x є {a,b}*} and process the string “abbacabba”.
Note: xr stands for the reverse of the string x.
19. What do you mean by an instantaneous description of a PDA. Explain with example.
20. (a) when do we say that PDA is nondeterministic? Design a PDA for recognizing the
language of palindromes over the input alphabet {a,b}.
(b) Distinguish between a DPDA and NPDA
UNIT-IV
1. Define a Turing machine mathematically. Define the term ‘move’ in a TM.
2. Design a TM that recognizes the set {0n1n | n >= 1}.
3. Design a Turing Machine that accepts the set of all even palindromes over{0,1}.
4. Give a Turing machine for the following:
(a) That computes ones complement of a binary number
(b) That shifts the input string, over the alphabet (0,1) by one position right by
inserting ‘#′as the first character.
5. Explain briefly about the different types of Turing Machines.
6. Design a Turing Machine (TM) that accepts the language, L = { 0n1n0n| n >=1 }
7. Consider the following transition table (States versus Tape symbols) of a Turing
Machine, M:
UNIT-V
TUTORIAL QUESTIONS
UNIT-I
4. (a) Compare NFA and DFA with the help of suitable examples.
(b) Construct a DFA that accepts an identifier of a ‘C’ programming language.
(c) Consider the following finite automaton that recognizes a set of strings of length 6.
What is the total number of strings in the set? Explain.
(c) Obtain the number of states in the minimized machine of the following state table of an FSM:
PS    NS, output (I/P = 0)    NS, output (I/P = 1)
A     D, 0                    A, 0
B     A, 0                    A, 1
C     B, 1                    C, 1
D     B, 1                    C, 1
7. Define an NFA mathematically. Explain its significance and functioning. Convert the
given finite automaton into its deterministic equivalent, explain the method, and,
taking a suitable example, prove that both accept the same strings:
8. (a) Design a DFA that accepts the language over the alphabet ∑ = {0, 1, 2} where the
decimal equivalent of the string is not divisible by 3.
(b) Design a Moore machine and a Mealy machine that accept strings over ∑ = {0, 1}
where, if the input ends in 001, output A; if the input ends in 100, output B;
else output C.
9. Design a Moore Machine to determine the residue mod 4 for each binary string
treated as integer.
10. Construct Minimum state Automata for the following DFA? (* denotes final state)
UNIT- II
UNIT- III
1. Find the left most and right most derivations for the word abba in the gram mar
S ->AA
A->aB
B->bB/a/b
2. Obtain a CFG to obtain balanced set of parentheses.(i.e every left parentheses
should match with the corresponding right parentheses).
3. Obtain a Left Linear Grammar for the DFA as shown in figure
11. Define a PDA. In what ways can a PDA show the acceptance of a string? Explain
with examples.
12. Define Deterministic pushdown automata. Explain with an example
13. Construct the PDA M for the language L = {wwR / w є {a, b}*} such that L = L(M).
14. Construct a PDA for the following language. {ambmcn/m, n> =1}.
15. Construct a PDA to accept strings over {0, 1} such that the number of 0’s is
greater than the number of 1’s.
16. Show that the languages
(a) L1 = {0n1m / n = m and n >= 1} and (b) L2 = {0n1m / n = 2m and n >= 1}
are deterministic context-free languages.
17. Given L = {ambn / n < m}, derive (a) a CFG that generates L, (b) a PDA accepting L by
empty stack, and (c) a PDA accepting L by final state.
18. What do you mean by an instantaneous description of a PDA. Explain with
example.
19. Define a PDA. Design a PDA for L = {xcxr / x є {a,b}*} and process the string
“abbacabba”.
20. Construct the PDA corresponding to the grammar:
S->aABB/aAA A->aBB/a B->bBB/A
UNIT- IV
1. Let T be the Turing machine defined by the five-tuples:
(q0, 0, q1, 1, R), (q0, 1, q1, 0, R), (q0, B, q1, 0, R),
(q1, 0, q2, 1, L), (q1, 1, q1, 0, R), (q1, B, q2, 0, L).
For each of the following initial tapes, determine the final tape when T halts,
assuming that T begins in the initial position.
2. Design a Turing machine to add two given integers.
3. Design a TM that can compute proper subtraction, i.e. m − n if m >= n and 0 otherwise.
4. Define Turing machine formally. Explain how Turing machine can be used to
compute integer functions?
5. Design the Turing machine to compute following function, Show its transition
diagram also f(x)=x2 where x is integer represented in unary.
6. Define a Turing machine (TM) and the language accepted by a TM. Design a TM
for recognizing the language (a+b)* aba (a+b)*. Draw its transition diagram and
table using the instantaneous description notation. Process the string “aabaabaaab”.
7. Design a TM for recognizing the language of palindromes over the input alphabet
{a,b}show the moves of TM for the string “abbbba”.
8. Design a Turing Machine, M that accepts the set of strings with an equal number
of 0’s and 1’s.
9. Design a Turing Machine (TM) that accepts the language, L = { 0n1n0n| n >=1 }
10. Design a T.M for copying of information from one place to the other place.
Assume all the necessary. assumptions. Give Example of the working of your T.M.
UNIT- V
Unit I
Unit-II
Unit III
A->BC|b
B->b|İC
C->Đ|İD
D->d
4. State which symbol is used to label the interior nodes of a parse tree.
5. Define the language of PDA accepted by final state.
6. List the steps to convert CFG to PDA.
7. Define CNF.
8. Define PDA.
9. Define NPDA.
10. Differentiate between deterministic and non deterministic PDA.
Unit IV
UNIT-V
Unit I
1. Define language over an alphabet with examples. Write a DFA to accept the set of all
strings ending with 010.
2. Give an example of minimizing a DFA.
3. Construct a Moore machine to accept the following language: L = {w | w mod 3 = 0} on ∑ = {0, 1, 2}.
4. Write any four differences between DFA and NFA
5. Convert an NFA with ε-moves to an NFA without ε-moves, with an example.
6. Construct NFA for (0 +1)*(00+11)(0+1)*and Convert to DFA.
7. Construct NFA for (0+1)*(00+11)(0+1)*and Draw the transition table and transition
diagram and example strings.
UNIT-II
1. Convert the regular expression 01* + 1 to a finite automaton.
2. Convert the given finite automaton to a regular expression using Arden’s theorem.
3. Convert the given finite automaton to a regular expression using the standard method (the Rijk construction).
4. Explain Identity rules. Give an example using the identity rules for the simplification.
5. Construct Regular grammar for the given Finite Automata.
6. Construct a Transition System M accepting L (G) for a given Regular Grammar G.
7. Discuss the properties of Context free Language. Explain the pumping lemma with an
example.
Unit III
1. Write a short notes on Chomsky Normal Form and Greibach Normal Form
2. Show that the following grammar is ambiguous with respect to the string aaabbabbba.
S → aB | bA, A → aS | bAA | a, B → bS | aBB | b
3. Illustrate the construction of Greibach normal form with an example.
4. Show that the following CFG is ambiguous.
S → iCtS | iCtSeS | a
C → b
5. Write the procedure to convert a CFG to a PDA and also convert the following CFG to a PDA.
S → B | aAA
A → aBB | a
B → bBB | A
C → a
7. Discuss the pumping lemma for context-free languages with the example
{anbncn where n > 0}.
Unit IV
1. Define a Turing Machine. With a neat diagram explain the working of a Turing Machine.
2. Construct a Turing Machine which shifts the non-blank symbols 3 cells to the right.
5. Construct a Turing machine for L = {w#wR | w ϵ (a+b)*}.
6. Write short notes on Recursive and Recursively Enumerable languages?
7. Write the properties of recursive and recursively enumerable languages
Unit V
i    A      B
1    11     111
2    100    001
3    111    11
Objective Questions
1 Which of the following statements is (are) correct ? D
If a language and its complement are both regular, the language is recursive
All of these
product
union
complementation
Kleene star
S --> Sa | b
S --> aSb | ab
None of these
None of these
None of these
9 The CFG S → aS | bS | a | b generates the language described by which regular expression? B
(a + b)
(a + b) (a + b)*
(a + b) (a + b)
None of these
Abc
Aab
abcc
abbb
11 B
None of these
12 The language of all words with at least 2 a's can be described by the regular D
expression
b* ab* a (a + b)*
all of these
15 Set of regular languages over a given alphabet set is not closed under D
Union
Complementation
Intersection
Any grammar
Only CG
15 B
Regular, Regular
None of these
context free
17 Given A = (0,1) and L = A*. If R = (0n 1n, n > 0) , then language L ∪ R and R are D
respectively
Regular,regular
18 B
Let L = {w | w is nonempty and has an equal number of 0’s and 1’s}
set of all binary strings with unequal number of 0’s and 1’s
set of all binary strings with exactly one more 0’s than the number of 1’s or 1
more than the number of 0’s
none of these
19 If L1 and L2 are context free language and R a regular set, then which one of the B
languages below is not necessarily a context free language?
L1 L2
L1 ∩ L2
L1 ∩ R
L1 ∪ L2
Context free
Regular
Context sensitive
LR(k)
21 What can be said about a regular language L over {a} whose minimal finite state A
automaton has two states?
L must be { an | n is even}
L must be { an | n is odd}
22 Given the language L = {ab, aa, baa}, which of the following strings are in L*? C
1) abaabaaabaa
2) aaaabaaaa
3) baaaaabaaaab
4) baaaaabaa
1,2 and 3
2,3 and 4
1,2 and 4
1,3 and 4
zero
One or more
Two or more
None of these
DFA
NDFA
PDA
All of these
1,2,3,4
1,2
2,3,4
3,4
None of these
O( f 2)
o( f 2)
o( f 2)
O(h2)
Regular
Not regular
Uncertain
None of these
29 Consider an alternate Turing machine model, in which there is an input tape on which A
the tape head can move in both directions but cannot write, and one or more work
tapes, one of which serves as an output tape. For a function f, DSpace(f) denotes
the set of languages that can be recognized by a Turing machine of this type which
uses no more than f(n) squares on any work tape for any input string of length n. The
only restriction we need to make on f is that f(n) > 0 for every n. The language of
balanced strings of parentheses are in
DSpace(1 + ⌈log2(n + 1)⌉). (⌈x⌉ means the smallest integer greater than or equal to
x.
none of these
Determining whether a universal Turing machine can be written with fewer than k instructions,
for some k
Carnot's function
Riemann function
Bounded function
Ackermann's function
32 Turing machine (TM) is more powerful than FSM (Finite State Machine) because C
none of these
L ∈Time (f)
L ∈ Time(cf)
L ∈ Time (h)
none of these
Space(s)
F(n)
Time(f)
Time(h)
Both an algorithm that answers either 'Yes' or 'No' and a Turing machine that decides
the problem are correct.
Allan Turing
Emil Post
Kurt Godel
None of them
string
None of them
39 If L1 and L2 are two regular languages then the problem “Is L1 ⨁ L2 (L1 Ex-OR L2) A
regular” is
decidable
undecidable
regular
Decidable always
Undecidable always
41 Let G1 and G2 are two grammars and G1 is regular grammar. The problem L(G1) = C
L(G2) or not is decidable when
G2 is a type 0 grammar
G2 is a regular grammar
Undecidable
Decidable
Dependent or nature of L1
Partially decidable
Undecidable
Decidable
Partially decidable
Undecidable
Decidable
Dependent or nature of L1
Partially decidable
Decidable
Undecidable
47 If a language L and its complement (∼L) are recursively enumerable then select the correct B
statement
None of them
If language L and its complement (∼L) are recursively enumerable then L and ∼L are
recursive
49 If there is a Turing machine M for a problem set P such that M, applied to any problem in the C
set P, terminates when the answer is “Yes” and may or may not terminate otherwise,
then P is
Solvable
Unsolvable
Partially Solvable
None of these
False
True
None of these
X is decidable
52 Given Turing machine M, produce a Turing machine M' such that M' halts and C
accepts x if M loops on x and M' loops otherwise.
Since we can't know if M will halt, we can't know if M' will halt.
Since we can't know if M' will halt, we can't know if M will halt.
53 Given M that takes no input and either accepts or loops, produce M' that enumerates A
primes if M accepts and enumerates ∅∅ if M loops.
Finite automata
PDA
None of these
Transition table
Transition diagram
Instantaneous description
All of these
All of these
None of these
None of these
All of these
Regular
Context-sensitive
Context-sensitive
65 The following grammar is of which type? A
S-> bS
S -> b
S -> aA
A -> bA
Type -3 grammar
Type -2 grammar
Type -1 grammar
Type -0 grammar
Can
Can not
May
May not
67 Why is a Turing machine more powerful than a finite state C
machine (FSM)?
The Turing machine head can move in both directions, i.e. it makes both left and right
movements.
A Turing machine has the capability to remember an arbitrarily long sequence of input symbols
Turing Machine
PDA
Post machine
69 If a Turing machine halts for each and every word of a language L and rejects others, C
then L is said to be
Recursive enumerable
Recursive
None of these
None of them
71 The difference between a read-only Turing machine and a two-way finite state C
machine is
Head movement
Finite control
Storage capacity
power
None of them
74 Four pairs follow; in each pair both objects have something in common. Choose D
the odd pair:
(TM, 2PDA)
(Computer, UTM)
(2PDA, nPDA)
(FA,PDA)
Computer system
Software
Hardware
All of them
Regular Grammar
Context-sensitive Grammar
None of these
type 0
type 1
type 2
type 3
Both B & C
80 The family of recursive languages is not closed under which of the following D
operations
Complementation
Union
Intersection
None of these
If a language and its complement are regular, then the language must be recursive
None of these
Recursive
Recursively enumerable
None of these
B is subset of A
A is subset of B
Recursive enumerable
Recursive
Not recursive
None
86 Which one of the following properties of recursively enumerable set is not recursively B
enumerable
L − Lu ≠ ∅
L ≠ ∅
None
L ≠ ∅
L − Lu ≠ ∅
None
88 Universal language is A
Recursive
Decidable
Non-recursively enumerable
recursively enumerable
Can’t say
Recursive
Non recursive
none
Can’t say
Decidable
Undecidable
None
If the emptiness problem is decidable for Type 3 grammars, then it is also decidable for
Type 0 grammars
If the emptiness problem is decidable for Type 0 grammars, then it is also decidable for
Type 3 grammars
94 The problem of determining whether a Turing machine will halt after giving a Yes/No B
output is
Decidable
Unsolvable
Solvable
Can’t say
Decidable
Undecidable
None
96 What can you say about the membership problem For type 0 grammars D
Partially decidable
Decidable
UnDecidable
None
Regular
Context free
Context sensitive
Recursive
Regular
Context free
Context sensitive
None
1,2,3,4,
1,2
2,3,4
3,4
Regular language
Both
None of these
Regular language
None of these
O( f 2)
o( f 2)
O( h 2)
O( h)
105 If T is a TM recognizing L, and T reads every symbol in the input string, τT(n) ≥ 2n + C
Regular
Not regular
Uncertain
None
Determining whether a universal Turing machine can be written with fewer than k instructions,
for some k
computability
all of these
true
false
both
None
112 If L can be recognized by a TM T with a doubly infinite tape, and τT = f, then L can be A
recognized by an ordinary TM with time complexity
O(f)
o(f)
O(h)
o(f)
113 Consider an undirected graph G with 100 nodes. The maximum number of edges to be C
included in G so that the graph is not connected is
5689
4931
4851
4321
114 Consider three decision problems P1, P2 and P3. It is known that P1 is decidable C
and P2 is undecidable. Which one of the following is TRUE?
P3 is decidable if P1 is reducible to P3
P3 is undecidable if P3 is reducible to P2
P3 is undecidable if P2 is reducible to P3
L1 ∩ L2 is a CFL
L1 ∪ L2 is a CFL
L1 ∪ L2 is inherently ambiguous
L1 ∩ L2 is a CSL
L2 = {an + m bn + m cm|n, m ≥ 0}
L3 = {an + m bn + m cm + n|n, m ≥ 0}
L1 only
L3 only
L1 and L2
L3 and L2
117 Which one of the following is the strongest correct statement about a finite D
language L over a finite alphabet Σ?
L is undecidable
L is recursive
L is CSL
L is regular set
NP-complete = NP
NP-complete ∩ P = ∅
NP-hard = NP
P = NP-complete
121 Let S be an NP-complete problem and Q and R be two other problems not known to B
be in NP. Q is polynomial time reducible to S and S is polynomial-time reducible to
R. Which one of the following statements is true? (GATE CS 2006)
R is NP-complete
R is NP-hard
Q is NP-complete
Q is NP- hard
122 Let X be a problem that belongs to the class NP. Then which one of the following is C
TRUE?
X may be undecidable
(1) The problem of determining whether there exists a cycle in an undirected graph is
in P.
(2) The problem of determining whether there exists a cycle in an undirected graph is
in NP.
1,3
1,2,3
1,2
2,3
124 Which of the following is true about NP-Complete and NP-Hard problems.
The first problem that was proved as NP-complete was the circuit satisfiability
problem.
None of these
MID-I
PART-A
MID-II
PART-A
PART-B
1(a) Design a PDA for the language of strings w ∊ (a+b)* with na(w) < nb(w).
(b) Let G be the CFG that generates the set of palindromes, given by S → aSa / bSb / a / b.
Find a PDA that accepts L(G).
or
2. Define and give examples of CNF, GNF, and the pumping lemma for CFLs.
3. Construct a Turing Machine for checking the palindrome of string of even length.
or
4. Explain context-sensitive languages and design an LBA for the language
L = {an bn cn / n ≥ 1}.
5(a) Explain Chomsky Hierarchy of Languages?
(b) Obtain the solution for the following system of post correspondence problem
A={100,0,1} B={1,100,00}
or
6. (a) Explain the Universal Turing Machine.
(b) Explain the classes of P and NP problems.
REFERENCES
1. Yan, Song Y. (1998). An Introduction to Formal Languages and Machine Computation. Singapore: World Scientific Publishing Co. Pte. Ltd. pp. 155–156.
2. Chakraborty, P., Saxena, P. C., Katti, C. P. (2011). Fifty Years of Automata Simulation: A Review. ACM Inroads, 2(4):59–70. http://dl.acm.org/citation.cfm?id=2038893
3. Jirí Adámek and Vera Trnková (1990). Automata and Algebras in Categories. Kluwer Academic Publishers: Dordrecht and Prague.
4. S. Mac Lane (1971). Categories for the Working Mathematician. Springer, New York.
5. Cartesian closed category. Archived November 16, 2011, at the Wayback Machine.
6. The Category of Automata. Archived September 15, 2011, at the Wayback Machine.
8. Aguiar, M. and Mahajan, S. (2010). “Monoidal Functors, Species, and Hopf Algebras”.
9. Meseguer, J., Montanari, U. (1990). Petri nets are monoids. Information and Computation 88:105–155.
10. John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman (2000). Introduction to Automata Theory, Languages, and Computation (2nd Edition). Pearson Education. ISBN 0-201-44124-1.
12. Elaine Rich (2008). Automata, Computability and Complexity: Theory and Applications. Pearson. ISBN 0-13-228806-0.
13. Salomaa, Arto (1985). Computation and Automata. Encyclopedia of Mathematics and Its Applications 25. Cambridge University Press. ISBN 0-521-30245-5. Zbl 0565.68046.
14. Anderson, James A. (2006). Automata Theory with Modern Applications. With contributions by Tom Head. Cambridge: Cambridge University Press. ISBN 0-521-61324-8. Zbl 1127.68049.
15. Conway, J. H. (1971). Regular Algebra and Finite Machines. Chapman and Hall Mathematics Series. London: Chapman & Hall. Zbl 0231.94041.
16. Sakarovitch, Jacques (2009). Elements of Automata Theory. Translated from the French by Reuben Thomas. Cambridge University Press. ISBN 978-0-521-84425-3. Zbl 1188.68177.