ATC Notes Module 5
MODULE 5
9.7 VARIANTS OF TURING MACHINES
The Turing machine we have introduced has a single tape; δ(q, a) is either a single triple
(p, y, D), where D = R or L, or is not defined. We now introduce two new models of TM:
(i) a TM with more than one tape, called a multitape TM, and
(ii) a TM where δ(q, a) = {(p1, y1, D1), (p2, y2, D2), ..., (pr, yr, Dr)}, called a nondeterministic TM.
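As an illustration (my own, not from the notes), the two kinds of transition function can be written as Python dictionaries; the entries below are hypothetical:

# A deterministic TM maps (state, symbol) to at most one triple (state, symbol, direction);
# a nondeterministic TM maps it to a list of such triples.
delta_deterministic = {
    ('q0', 'a'): ('q1', 'X', 'R'),            # exactly one choice (or the key is absent)
}
delta_nondeterministic = {
    ('q0', 'a'): [('q1', 'X', 'R'),           # several alternative moves for the same
                  ('q2', 'a', 'L')],          # (state, symbol) pair
}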
There are k tapes, each divided into cells. The first tape holds the input string w.
Initially, all the other tapes hold the blank symbol.
Initially the head of the first tape (input tape) is at the left end of the input w. All
the other heads can be placed at any cell initially.
δ is a partial function from Q × Γ^k into Q × Γ^k × {L, R, S}^k.
A move depends on the current state and k tape symbols under k tape heads.
In a typical move:
(i) M enters a new state.
(ii) On each tape, a new symbol is written in the cell under the head.
(iii) Each tape head moves to the left or right or remains stationary. The heads
move independently: some move to the left, some to the right and the remaining
heads do not move.
The initial ID has the initial state q0, the input string w on the first tape (input
tape), and blanks (b's) on the remaining k - 1 tapes. An accepting ID has a final
state and some string on each of the k tapes.
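A minimal sketch of a single move of a k-tape TM (my own illustration in Python, not part of the notes; the blank symbol B and the dictionary layout of δ are assumptions):

B = 'B'  # blank symbol (assumption)

def step(state, tapes, heads, delta):
    """Perform one move of a k-tape TM.

    tapes : list of k lists of tape symbols
    heads : list of k head positions
    delta : dict mapping (state, tuple of k scanned symbols) to
            (new_state, tuple of k symbols to write, tuple of k directions in {'L','R','S'})
    Returns the new state, or None if delta is undefined (the machine halts).
    """
    scanned = tuple(tape[pos] for tape, pos in zip(tapes, heads))
    if (state, scanned) not in delta:
        return None                      # no move defined: halt
    new_state, writes, moves = delta[(state, scanned)]
    for i, (sym, d) in enumerate(zip(writes, moves)):
        tapes[i][heads[i]] = sym         # write on each tape under its head
        if d == 'R':
            heads[i] += 1
            if heads[i] == len(tapes[i]):
                tapes[i].append(B)       # extend the tape with blanks on demand
        elif d == 'L':
            heads[i] = max(0, heads[i] - 1)
        # 'S': the head stays where it is
    return new_state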
Initially the contents of tapes 1 and 2 of M are stored in the second and fourth tracks of
M1. The head markers of the first and third tracks are at the cells containing the first
symbol.
To simulate a move of M, the 2k-track TM M1 has to visit the two head markers and
store the scanned symbols in its control. Keeping track of the head markers visited and
those to be visited is achieved by keeping a count and storing it in the finite control of
M1. Note that the finite control of M1 also has the information about the states of M and
its moves. After visiting both head markers, M1 knows the tape symbols being scanned
by the two heads of M.
M1 revisits each of the head markers:
(i) It changes the tape symbol in the corresponding track of M1, based on the move
of M determined by the state (of M) and the tape symbol on the corresponding
tape of M.
(ii) It moves the head markers to the left or right.
(iii) M1 changes the state of M in its control.
This is the simulation of a single move of M.
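To make the multi-track encoding concrete, here is a small sketch (an assumption of mine, not the book's exact construction): for k = 2, each cell of M1's single tape holds a 4-tuple consisting of a head marker and a symbol for tape 1, and a head marker and a symbol for tape 2.

B = 'B'   # blank symbol (assumption)

def encode_two_tapes(tape1, head1, tape2, head2):
    """Pack two tapes and their head positions into one 4-track tape.

    Each cell of the single tape is a tuple (marker1, symbol1, marker2, symbol2),
    where markerX is 'X' if the corresponding head of M is on this cell and ' ' otherwise.
    """
    n = max(len(tape1), len(tape2))
    tape1 = tape1 + [B] * (n - len(tape1))
    tape2 = tape2 + [B] * (n - len(tape2))
    return [('X' if i == head1 else ' ', tape1[i],
             'X' if i == head2 else ' ', tape2[i]) for i in range(n)]

# Example: tape 1 holds the input, tape 2 is still blank.
print(encode_two_tapes(list("01"), 0, [B], 0))
# [('X', '0', 'X', 'B'), (' ', '1', ' ', 'B')]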
2. If the state q, say, in the current ID xqay is not an accepting state of M and δ(q,
a) has k triples, M1 copies the ID xqay on the second tape and makes k copies of
this ID at the end of the sequence of IDs on tape 2.
3. M1 modifies these k IDs on tape 2 according to the k choices given by δ(q, a).
4. M1 returns to the marked current ID, erases the mark x and marks the next ID-
separator * with x. Then M1 goes back to step 1. (A breadth-first sketch of this idea appears below.)
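The same idea — keep a collection of IDs and expand each ID by every choice of δ — can be sketched as a breadth-first search in Python (my own illustrative version, not the book's tape-2 construction; the encoding of an ID as (state, tape, head) and the blank symbol B are assumptions):

from collections import deque

B = 'B'  # blank symbol (assumption)

def ntm_accepts(delta, start, accept, w, max_configs=10000):
    """Deterministically simulate a nondeterministic TM by breadth-first search
    over its IDs (configurations), mirroring the growing sequence of IDs kept
    on tape 2 in the construction above.

    delta maps (state, symbol) to a list of (state, symbol, 'L'/'R') triples.
    """
    start_id = (start, tuple(w) if w else (B,), 0)
    queue, seen = deque([start_id]), {start_id}
    while queue and len(seen) <= max_configs:
        state, tape, head = queue.popleft()
        if state == accept:
            return True
        for new_state, write, move in delta.get((state, tape[head]), []):
            t = list(tape)
            t[head] = write
            h = head + 1 if move == 'R' else head - 1
            if h < 0:
                t.insert(0, B)            # extend the tape to the left
                h = 0
            elif h == len(t):
                t.append(B)               # extend the tape to the right
            new_id = (new_state, tuple(t), h)
            if new_id not in seen:
                seen.add(new_id)
                queue.append(new_id)
    return False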
In this section we define the model of linear bounded automaton and develop the relation
between the linear bounded automata and context-sensitive languages. It should be
noted that the study of context-sensitive languages is important from a practical point of
view because many compiler languages lie between context-sensitive and context-free
languages.
All the symbols have the same meaning as in the basic model of Turing machines, with
the difference that the input alphabet Σ contains two special symbols ¢ and $. ¢ is called
the left-end marker; it is entered in the leftmost cell of the input tape and prevents
the R/W head from getting off the left end of the tape. $ is called the right-end marker;
it is entered in the rightmost cell of the input tape and prevents the R/W head from
getting off the right end of the tape. Neither end marker should appear in any other
cell of the input tape, and the R/W head should not print any other symbol over either
end marker.
Let us consider the input string w with |w| = n - 2 (the tape then has n cells, the first and last holding ¢ and $). The input string w can be recognized
by an LBA if it can also be recognized by a Turing machine using no more than kn cells
of input tape, where k is a constant specified in the description of LBA. The value of k
does not depend on the input string but is purely a property of the machine. Whenever
we process any string in LBA, we shall assume that the input string is enclosed within
the end markers ¢ and $.
The above model of LBA can be represented by the block diagram of Fig. 9.11.
There are two tapes: one is called the input tape, and the other the working tape.
On the input tape the head never prints and never moves to the left. On the working
tape the head can modify the contents in any way, without any restriction.
In the case of an LBA, an ID is denoted by (q, w, k), where q ∈ Q, w ∈ Γ* and k is some integer
between 1 and n. The transition of IDs is similar, except that k changes to k - 1 if the R/W
head moves to the left and to k + 1 if the head moves to the right.
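As a small illustration (my own, not from the notes), one move of an LBA changes only the state, the symbol under the head and the head position k, which stays between 1 and n because of the end markers:

def lba_step(delta, q, w, k):
    """One move of an LBA on ID (q, w, k); w includes the end markers
    '¢' and '$', and k is the 1-based head position."""
    if (q, w[k - 1]) not in delta:
        return None                               # no move defined: the machine halts
    p, y, d = delta[(q, w[k - 1])]
    if w[k - 1] in '¢$':
        y = w[k - 1]                              # end markers are never overwritten
    w = w[:k - 1] + y + w[k:]
    k = k - 1 if d == 'L' else k + 1
    k = max(1, min(len(w), k))                    # the head cannot leave ¢ ... $
    return p, w, k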
CHAPTER 10
Hilbert's tenth problem was to devise 'a process according to which it can be determined
by a finite number of operations' whether a polynomial over Z has an integral root. This
was not answered until 1970.
The formal definition of algorithm emerged after the work of Alan Turing and Alonzo
Church in 1936. The Church-Turing thesis states that any algorithmic procedure
that can be carried out by a human or a computer can also be carried out by a
Turing machine. Thus the Turing machine arose as an ideal theoretical model for an
algorithm. The Turing machine provided machinery to mathematicians for attacking
Hilbert's tenth problem. The problem can be restated as follows: does there exist a TM
that can accept a polynomial over n variables if it has an integral root and reject the
polynomial if it does not have one? In 1970, Yuri Matiyasevich, after studying the work of
Martin Davis, Hilary Putnam and Julia Robinson, showed that no such algorithm (Turing
machine) exists for testing whether a polynomial over n variables has integral roots.
10.2 DECIDABILITY
These terms are also defined using Turing machines. When a Turing machine
reaches a final state, it halts.
We can also say that a Turing machine M halts when M reaches a state q and a
current symbol 'a' to be scanned such that δ(q, a) is undefined.
There are TMs that never halt on some inputs in either of these ways. So we
make a distinction between the languages accepted by a TM that halts on all input
strings and a TM that does not halt on some input strings.
Definition 10.3 A problem with two answers (Yes/No) is decidable if the corresponding
language is recursive. In this case, the language L is also called decidable.
Definition 10.4 A problem/language is undecidable if it is not decidable.
Proof: To prove the theorem, we have to construct a TM that always halts and also
accepts A_DFA = {(B, w) | B is a DFA that accepts the input string w}. We describe the
TM M using a high-level description. Note that a DFA B always ends in some state of B
after n transitions for an input string of length n.
We define a TM M as follows:
1. Let B be a DFA and w an input string. (B, w) is an input for the Turing machine M.
2. Simulate B on input w in the TM M.
3. If the simulation ends in an accepting state of B, then M accepts w.
If it ends in a non-accepting state of B, then M rejects w.
We can discuss a few implementation details regarding steps 1, 2 and 3 above. The input
(B, w) for M is represented by encoding the five components Q, Σ, δ, q0, F as strings
over Σ* together with the input string w ∈ Σ*. M checks whether (B, w) is a valid input. If not, it rejects
(B, w) and halts. If (B, w) is a valid input, M writes the initial state q0 and the leftmost
input symbol of w. It updates the state using δ and then reads the next symbol in w.
This explains step 2.
If the simulation ends in an accepting state of B, then M accepts (B, w). Otherwise, M rejects
(B, w). This is the description of step 3. It is evident that M accepts (B, w) if and only if w
is accepted by the DFA B.
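The decision procedure amounts to simulating B on w for exactly |w| transitions; a minimal Python sketch (the dictionary encoding of B and the helper name are my own assumptions):

def decide_A_DFA(delta, q0, finals, w):
    """Decide whether the DFA B = (Q, Sigma, delta, q0, F) accepts w.
    This always halts: it performs exactly len(w) transitions."""
    state = q0
    for symbol in w:
        if (state, symbol) not in delta:
            return False                  # invalid input symbol: reject
        state = delta[(state, symbol)]
    return state in finals

# Example: a DFA accepting strings over {0,1} with an even number of 1s.
delta = {('e', '0'): 'e', ('e', '1'): 'o',
         ('o', '0'): 'o', ('o', '1'): 'e'}
print(decide_A_DFA(delta, 'e', {'e'}, '1011'))   # False (three 1s)
print(decide_A_DFA(delta, 'e', {'e'}, '101'))    # True  (two 1s)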
Proof: We first convert the CFG G into Chomsky normal form. Any derivation of a string w of length
k requires exactly 2k - 1 steps if the grammar is in CNF. So, for checking whether the input
string w of length k is in L(G), it is enough to check the derivations of 2k - 1 steps. We know
that there are only finitely many derivations of 2k - 1 steps. Now we design a TM M that
halts as follows.
1. Let G be a CFG in Chomsky normal form and w an input string. (G, w) is an input for
M.
2. If k = 0, list all the single-step derivations. If k ≠ 0, list all the derivations with 2k - 1
steps.
3. If any of the derivations in step 2 generates the given string w, M accepts (G, w).
Otherwise M rejects.
(G, w) is represented by encoding the four components VN, Σ, P, S of G together with the input
string w. The next step of a derivation is obtained by applying a production.
M accepts (G, w) if and only if w is generated by the CFG G.
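A brute-force sketch of this decision procedure (my own illustration, not an efficient parser): enumerate all leftmost derivations of exactly 2|w| - 1 steps from a grammar in CNF and check whether w appears; the dictionary encoding of the productions is an assumption.

def decide_A_CFG(productions, start, w):
    """Check whether a CFG in Chomsky normal form generates w by examining
    every leftmost derivation of exactly 2*len(w) - 1 steps (1 step if w is empty).

    productions: dict mapping a variable to a list of right-hand sides,
                 each given as a tuple of symbols, e.g. {'S': [('A', 'B'), ('a',)]}.
    """
    k = len(w)
    steps = 2 * k - 1 if k > 0 else 1
    forms = {(start,)}                           # sentential forms reachable so far
    for _ in range(steps):
        next_forms = set()
        for form in forms:
            for i, sym in enumerate(form):
                if sym in productions:           # expand only the leftmost variable
                    for rhs in productions[sym]:
                        new = form[:i] + rhs + form[i + 1:]
                        if len(new) <= max(k, 1):    # CNF sentential forms never shrink
                            next_forms.add(new)
                    break
        forms = next_forms
    return tuple(w) in forms

# Example: a CNF grammar for {a^n b^n : n >= 1}: S -> AT | AB, T -> SB, A -> a, B -> b.
G = {'S': [('A', 'T'), ('A', 'B')], 'T': [('S', 'B')], 'A': [('a',)], 'B': [('b',)]}
print(decide_A_CFG(G, 'S', 'aabb'))   # True
print(decide_A_CFG(G, 'S', 'abab'))   # False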
In Theorem 4.3, we proved that a context-sensitive language is recursive. The main idea
of the proof of Theorem 4.3 was to construct a sequence {W0, W1, ..., Wk} of subsets of
(VN ∪ Σ)* that terminates after a finite number of iterations. The given string w ∈ Σ* is in
L(G) if and only if w ∈ Wk. With this idea in mind we can prove the decidability of
context-sensitive languages.
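A sketch of that iteration (my own, assuming the grammar is given as a list of productions α → β with |α| ≤ |β|): each pass adds every string of length at most |w| derivable in one step from the current set, and the iteration stops at a fixed point.

def decide_A_CSG(productions, start, w):
    """Membership test for a context-sensitive grammar: iterate the sets
    W0 ⊆ W1 ⊆ ... of sentential forms of length <= |w| until a fixed point,
    then check whether w itself has been produced.

    productions: list of pairs (alpha, beta) with len(alpha) <= len(beta).
    """
    n = max(len(w), 1)
    W = {start}
    while True:
        new = set(W)
        for form in W:
            for alpha, beta in productions:
                i = form.find(alpha)
                while i != -1:
                    cand = form[:i] + beta + form[i + len(alpha):]
                    if len(cand) <= n:          # never keep forms longer than |w|
                        new.add(cand)
                    i = form.find(alpha, i + 1)
        if new == W:                            # fixed point reached: Wk = W(k+1)
            return w in W
        W = new

# Example: a standard context-sensitive grammar for {a^n b^n c^n : n >= 1}.
P = [('S', 'aSBC'), ('S', 'aBC'), ('CB', 'BC'),
     ('aB', 'ab'), ('bB', 'bb'), ('bC', 'bc'), ('cC', 'cc')]
print(decide_A_CSG(P, 'S', 'aabbcc'))   # True
print(decide_A_CSG(P, 'S', 'aabbc'))    # False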
CHAPTER 12
COMPLEXITY
When a problem/language is decidable, it simply means that the problem is
computationally solvable in principle. It may not be solvable in practice, in the sense that
it may require an enormous amount of computation time and memory. In this chapter we
discuss the computational complexity of a problem. The proofs of
decidability/undecidability are quite rigorous, since they depend solely on the definition
of a Turing machine and rigorous mathematical techniques. But the proofs and the
discussion in complexity theory rest on the assumption that P ≠ NP. Computer
scientists and mathematicians strongly believe that P ≠ NP, but this is still open.
This problem is one of the challenging problems of the 21st century and carries
a prize money of $1M. P stands for the class of problems that can be solved by a
deterministic algorithm (i.e. by a Turing machine that halts) in polynomial time; NP
stands for the class of problems that can be solved by a nondeterministic algorithm (that
is, by a nondeterministic TM) in polynomial time; P stands for polynomial and NP for
nondeterministic polynomial. Another important class is the class of NP-complete
problems, which is a subclass of NP.
Ex: Construct the time complexity T(n) for the Turing machine M that accepts L = {0^n 1^n : n >= 1}.
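The exercise can be explored experimentally. The sketch below (my own transition table for one standard way of building this machine) simulates the usual marking TM for {0^n 1^n : n >= 1} and counts its moves; the counts grow roughly like n^2, suggesting T(n) = O(n^2).

B = 'B'  # blank symbol (assumption)

# Mark a 0 as X, walk right to the first 1, mark it as Y, walk back to the X,
# repeat; finally check that only Ys remain.  q4 is the accepting state.
delta = {
    ('q0', '0'): ('q1', 'X', 'R'), ('q0', 'Y'): ('q3', 'Y', 'R'),
    ('q1', '0'): ('q1', '0', 'R'), ('q1', 'Y'): ('q1', 'Y', 'R'),
    ('q1', '1'): ('q2', 'Y', 'L'),
    ('q2', '0'): ('q2', '0', 'L'), ('q2', 'Y'): ('q2', 'Y', 'L'),
    ('q2', 'X'): ('q0', 'X', 'R'),
    ('q3', 'Y'): ('q3', 'Y', 'R'), ('q3', B): ('q4', B, 'R'),
}

def run(w):
    """Run the machine on w and return (accepted?, number of moves made)."""
    tape, head, state, steps = list(w) + [B], 0, 'q0', 0
    while (state, tape[head]) in delta:
        new_state, write, move = delta[(state, tape[head])]
        tape[head] = write
        state = new_state
        head += 1 if move == 'R' else -1
        if head == len(tape):
            tape.append(B)
        steps += 1
    return state == 'q4', steps

for n in (2, 4, 8, 16):
    print(n, run('0' * n + '1' * n))   # the move count grows roughly like n^2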
Ex: Find the running time of the Euclidean algorithm for evaluating gcd(a, b),
where a and b are positive integers expressed in binary representation.
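For this exercise, a short sketch (my own): Euclid's algorithm repeatedly replaces (a, b) by (b, a mod b), and the number of division steps is O(log min(a, b)), i.e. polynomial in the number of bits of the input.

def gcd_with_count(a, b):
    """Euclid's algorithm; returns (gcd, number of division steps).
    The smaller argument at least halves every two steps, so the step count
    is O(log min(a, b)) -- polynomial in the length of the binary input."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return a, steps

print(gcd_with_count(1071, 462))   # (21, 3)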
In 1982, Richard Feynman, a Nobel laureate in physics, suggested that scientists should
start thinking of building computers based on the principles of quantum mechanics.
The subject of physics studies elementary objects and simple systems, and the study
becomes more interesting when things are larger and more complicated. Quantum
computation and information based on the principles of Quantum Mechanics will
provide tools to fill up the gulf between the small and the relatively complex systems in
physics. In this section we provide a brief survey of quantum computation and
information and its impact on complexity theory.
Quantum mechanics arose in the early 1920s, when classical physics could not explain
everything even after adding ad hoc hypotheses. The rules of quantum mechanics were
simple but looked counterintuitive, and even Albert Einstein reconciled himself with
quantum mechanics only with a pinch of salt.
Recall the Church-Turing thesis which asserts that any algorithm that can be performed
on any computing machine can be performed on a Turing machine as well.
Miniaturization of chips has increased the power of the computer. The growth of
computer power is now described by Moore's law, which states that the computer power
will double for constant cost roughly once every two years. It is now felt that a limit to this
doubling will be reached in two or three decades, since quantum effects will
begin to interfere with the functioning of electronic devices as they are made smaller and
smaller. So efforts are on to provide a theory of quantum computation which will
compensate for the possible failure of Moore's law.
In the mid-1970s, Robert Solovay and Volker Strassen gave a randomized algorithm for
testing the primality of a number. (A deterministic polynomial algorithm was given by
Manindra Agrawal, Neeraj Kayal and Nitin Saxena of IIT Kanpur in 2003.) This led to
a modification of the Church-Turing thesis.
Computers are physical objects, and computations are physical processes. What
computers can or cannot compute is determined by the laws of physics alone, and not by
pure mathematics.
-David Deutsch
But it is not known whether Deutsch's notion of universal quantum computer will
efficiently simulate any physical process. In 1994, Peter Shor proved that finding the
prime factors of a composite number and the discrete logarithm problem (i.e. finding the
positive value of s such that b = a^s for the given positive integers a and b) could be solved
efficiently by a quantum computer. This may be a pointer to proving that quantum
computers are more efficient than Turing machines (and classical computers).
******************************************************************************