hand were shown earlier to be the classes of Type 3 and Type 2 languages, respectively. The following two theorems show that the class of languages accepted by Turing machines is the class of Type 0 languages. Theorem 4.6.1 Each Type 0 language is a recursively enumerable language.
Proof Consider any Type 0 grammar G = <N, Σ, P, S>. From G construct a two auxiliary-work-tape Turing machine MG that on a given input x nondeterministically generates some string w in L(G), and then accepts x if and only if x = w. The Turing machine MG generates the string w by tracing a derivation in G of w from S. MG starts by placing the sentential form S on the first auxiliary work tape. Then MG repeatedly replaces the sentential form stored on the first auxiliary work tape with the one that succeeds it in the derivation. The second auxiliary work tape is used as an intermediate memory while deriving the successor of each of the sentential forms. The successor of each sentential form γ is obtained by nondeterministically searching for a substring α of γ, such that α → β is a production rule in G, and then replacing α by β in γ. MG uses a subcomponent M1 to copy the prefix of γ that precedes α onto the second auxiliary work tape. MG uses a subcomponent M2 to read α from the first auxiliary work tape and replace it by β on the second. MG uses a subcomponent M3 to copy the suffix of γ that succeeds α onto the second auxiliary work tape. MG uses a subcomponent M4 to copy the sentential form created on the second auxiliary work tape onto the first. In addition, MG uses M4 to determine whether the new sentential form is a string w in L(G). If w is in L(G), then the control is passed to a subcomponent M5. Otherwise, the control is passed to M1. MG uses the subcomponent M5 to determine whether the input string x is equal to the string w stored on the first auxiliary work tape. Example 4.6.1 Consider the grammar G which has the following production rules.

S → aSbS        Sb → ε
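The generate-and-compare strategy of MG can be sketched in software by replacing its nondeterministic choices with a breadth-first search over sentential forms: every string of L(G) is reached after finitely many rewriting steps, which is why L(G) is recursively enumerable. The sketch below is an illustration only; the grammar S → aSb, S → ε is a hypothetical example of mine (not the grammar of Example 4.6.1), and the step bound is added so that the search terminates, whereas MG itself needs no such bound.

```python
from collections import deque

def derives(rules, start, target, max_steps):
    """Search derivations of the grammar breadth-first, up to max_steps
    rewriting steps, and report whether target was reached."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        form, steps = queue.popleft()
        if form == target:
            return True
        if steps == max_steps:
            continue
        for lhs, rhs in rules:
            # rewrite every occurrence of lhs, mirroring the
            # nondeterministic choice of a substring made by MG
            i = form.find(lhs)
            while i != -1:
                succ = form[:i] + rhs + form[i + len(lhs):]
                if succ not in seen:
                    seen.add(succ)
                    queue.append((succ, steps + 1))
                i = form.find(lhs, i + 1)
    return False

# hypothetical example grammar: S -> aSb | epsilon, generating a^n b^n
rules = [("S", "aSb"), ("S", "")]
print(derives(rules, "S", "aabb", 6))   # -> True
print(derives(rules, "S", "ba", 6))     # -> False
```

Dropping the step bound turns this into a true semi-decision procedure: it halts with yes on every member of L(G), and may run forever otherwise.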
The language L(G) is accepted by the Turing machine MG, whose transition diagram is given in Figure 4.6.1.
Figure 4.6.1
A Turing machine MG for simulating the grammar G that has the production rules S → aSbS and Sb → ε.
The components M1, M2, and M3 scan from left to right the sentential form stored on the first auxiliary work tape. As the components scan the tape they erase its content. The component M2 of MG uses two different sequences of transition rules for the first and second production rules: S → aSbS and Sb → ε. The sequence of transition rules that corresponds to S → aSbS removes S from the first auxiliary work tape and stores aSbS on the second. The sequence of transition rules that corresponds to Sb → ε removes Sb from the first auxiliary work tape and stores nothing on the second. The component M4 scans from right to left the sentential form in the second auxiliary work tape, erasing the content of the tape during the scanning. M4 starts scanning the sentential form in its first state, determining that the sentential form is a string of terminal symbols if it reaches the blank symbol B while in the first state. In such a case, M4 transfers the control to M5. M4 determines that the sentential form is not a string of terminal symbols if it reaches a nonterminal symbol. In this case, M4 switches from its first to its second state. Theorem 4.6.2 Each recursively enumerable language is a Type 0 language.
Proof The proof consists of constructing from a given Turing machine M a grammar that can simulate the computations of M. The constructed grammar G consists of three groups of production rules.
The purpose of the first group is to determine the following three items. a. An initial configuration of M on some input. b. Some segment for each auxiliary work tape of M. Each segment must include the location under the head of the corresponding tape. c. Some sequence of transition rules of M. The sequence of transition rules must start at the initial state, end at an accepting state, and be compatible in the transitions that it allows between the states. The group of production rules can specify any initial configuration of M, any segment of an auxiliary work tape that satisfies the above conditions, and any sequence of transition rules that satisfies the above conditions. The purpose of the second group of production rules is to simulate a computation of M. The simulation must start at the configuration determined by the first group. In addition, the simulation must be in accordance with the sequence of transition rules, and within the segments of the auxiliary work tapes determined by the first group. The purpose of the third group of production rules is to extract the input whenever an accepting computation has been simulated, and to leave nonterminal symbols in the sentential form in the other cases. Consequently, the grammar can generate a given string if and only if the Turing machine M has an accepting computation on the string. Consider any Turing machine M = <Q, Σ, Γ, δ, q0, B, F>. With no loss of generality it can be assumed that M is a two auxiliary-work-tape Turing machine (see Theorem 4.3.1 and Proposition 4.3.1), that no transition rule originates at an accepting state, and that N = { X̄ | X is in Γ } ∪ { [q] | q is in Q } ∪ { ¢, $, ∇0, ∇1, ∇2, #, S, A, C, D, E, F, K } is a multiset whose symbols are all distinct. Here ∇0, ∇1, and ∇2 are symbols that mark the locations of the input head and of the heads of the two auxiliary work tapes, respectively. From M construct a grammar G = <N, Σ, P, S> that generates L(M), by tracing in its derivations the configurations that M goes through in its accepting computations. The production rules in P are of the following form. a.
Production rules for generating any sentential form that has the following pattern.
Each such sentential form corresponds to an initial configuration (q0¢a1 ⋯ an$, q0, q0) of M, and a sequence of transition rules τi1 ⋯ τit. The transition rules define a sequence of compatible states that starts at the initial state and ends at an accepting state. ∇0 represents the input head, ∇1 represents the head of the first auxiliary work tape, and ∇2 represents the head of the second auxiliary work tape. The string B̄ ⋯ B̄∇1B̄ ⋯ B̄ corresponds to a segment of the first auxiliary work tape, and the string B̄ ⋯ B̄∇2B̄ ⋯ B̄ to a segment of the second.
A string in the language is derivable from the sentential form if and only if the following three conditions hold. 1. The string is equal to a1 ⋯ an. 2. M accepts a1 ⋯ an in a computation that uses the sequence of transition rules τi1 ⋯ τit. 3. B̄ ⋯ B̄∇iB̄ ⋯ B̄ corresponds to a segment of the ith auxiliary work tape that is sufficiently large for the considered computation of M, 1 ≤ i ≤ 2. The position of ∇i in the segment indicates the initial location of the corresponding auxiliary work-tape head in the segment. The production rules are of the following form.
The production rules for the nonterminal symbols S and A can generate a string of the form ¢a1 ⋯ an$C for each possible input a1 ⋯ an of M. The production rules for the nonterminal symbols C and D can generate a string of the form B̄ ⋯ B̄∇1B̄ ⋯ B̄#E for each possible segment B̄ ⋯ B̄∇1B̄ ⋯ B̄ of the first auxiliary work tape that contains the corresponding head location. The production rules for E and F can generate a string of the form B̄ ⋯ B̄∇2B̄ ⋯ B̄#[q0] for each possible segment B̄ ⋯ B̄∇2B̄ ⋯ B̄ of the second tape that contains the corresponding head location. The production rules for the nonterminal symbols that correspond to the states of M can generate any sequence τi1 ⋯ τit of transition rules of M that starts at the initial state, ends at an accepting state, and is compatible in the transition between the states. b. Production rules for deriving from a sentential form
which corresponds to a configuration C = (uqv$, u1qv1, u2qv2) of M, a sentential form that corresponds to a configuration C′ of M. C and C′ are assumed to be two configurations of M such that C′ is reachable from C by a move that uses the transition rule τij. For each transition rule τ the set of production rules has
1. A production rule of the form Xτ → τX for each X in Γ̄ ∪ {¢, $, #}.
2. A production rule of the form ∇0aτ → τ∇0a, for each symbol a in Σ ∪ {¢, $} that satisfies the following condition: τ is a transition rule that scans the symbol a on the input tape without moving the input head.
3. A production rule of the form ∇0aτ → τa∇0, for each symbol a in Σ ∪ {¢, $} that satisfies the following condition: τ is a transition rule that scans the symbol a on the input tape while moving the input head one position to the right.
4. A production rule of the form a∇0bτ → τ∇0ab, for each pair of symbols a and b in Σ ∪ {¢, $} that satisfy the following condition: τ is a transition rule that scans the symbol b on the input tape while moving the input head one position to the left.
5. A production rule of the form ∇iX̄τ → τ∇iȲ for each 1 ≤ i ≤ 2, and for each pair of symbols X and Y in Γ that satisfy the following condition: τ is a transition rule that replaces X with Y in the ith auxiliary work tape without changing the head position.
6. A production rule of the form ∇iX̄τ → τȲ∇i for each 1 ≤ i ≤ 2, and for each pair of symbols X and Y in Γ that satisfy the following condition: τ is a transition rule that replaces X with Y in the ith auxiliary work tape while moving the corresponding head one position to the right.
7. A production rule of the form X̄∇iȲτ → τ∇iX̄Z̄ for each 1 ≤ i ≤ 2, and for each triplet of symbols X, Y, and Z in Γ that satisfy the following condition: τ is a transition rule that replaces the symbol Y with Z in the ith auxiliary work tape while moving the corresponding head one position to the left.
The purpose of the production rules in (1) is to transport τij from right to left over the nonhead symbols in Γ̄ ∪ {¢, $, #}, across a representation
of a configuration of M. τij gets across the head symbols ∇0, ∇1, and ∇2 by using the production rules in (2) through (7). As τij gets across the head symbols, the production rules in (2) through (7) "simulate" the changes in the tapes of M, and in the corresponding head positions, caused by the transition rule τij. c. Production rules for extracting from a sentential form
which corresponds to an accepting configuration of M, the input that M accepts. The production rules are as follows.
Example 4.6.2 Let M be the Turing machine whose transition diagram is given in Figure 4.5.6(a). L(M) is generated by the grammar G that consists of the following production rules. A. Production rules that find a sentential form that corresponds to the initial configuration of M, according to (a) in the proof of Theorem 4.6.2.
B. "Transporting" production rules that correspond to (b.1) in the proof of Theorem 4.6.2, 1 ≤ i ≤ 5.
C. "Simulating" production rules that correspond to (b.2-b.4) in the proof of Theorem 4.6.2.
D. "Simulating" production rules that correspond to (b.5-b.7) in the proof of Theorem 4.6.2.
E. "Extracting" production rules that correspond to (c) in the proof of Theorem 4.6.2.
Theorem 4.6.2, together with Theorem 4.5.3, implies the following result. Corollary 4.6.1 The membership problem is undecidable for Type 0 grammars or, equivalently, for { (G, x) | G is a Type 0 grammar, and x is in L(G) }. A context-sensitive grammar is a Type 1 grammar in which each production rule has the form α1Aα2 → α1βα2 for some nonterminal symbol A and some nonempty β. Intuitively, a production rule of the form α1Aα2 → α1βα2 indicates that A can be replaced by β only if it is within the left context of α1 and the right context of α2. A language is said to be a context-sensitive language, if it can be generated by a context-sensitive grammar. A language is context-sensitive if and only if it is a Type 1 language (Exercise 4.6.4), and if and only if it is accepted by a linear bounded automaton (Exercise 4.6.5). By definition and Theorem 3.3.1, each context-free language is also context-sensitive, but the converse is false
because the non-context-free language { a^ib^ic^i | i ≥ 0 } is context-sensitive. It can also be shown that each context-sensitive language is recursive (Exercise 1.4.4), and that the recursive language LLBA_reject = { x | x = xi and Mi does not have accepting computations on input xi in which at most |xi| locations are visited in each auxiliary work tape } is not context-sensitive (Exercise 4.5.6). Figure 4.6.2
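The claim that each context-sensitive language is recursive can be made concrete: because noncontracting rules never shorten a sentential form, a search that discards every form longer than the input must terminate, so membership is decidable. The grammar below for { a^ib^ic^i | i ≥ 1 } is a standard textbook grammar chosen by me for illustration (it is not the book's, and it drops the empty string for simplicity).

```python
def csg_member(rules, start, target):
    """Decide membership by generating every sentential form of length at
    most len(target); noncontracting rules make this set finite."""
    frontier = [start]
    seen = {start}
    while frontier:
        form = frontier.pop()
        for lhs, rhs in rules:
            i = form.find(lhs)
            while i != -1:
                succ = form[:i] + rhs + form[i + len(lhs):]
                # noncontracting rules: forms longer than the target can
                # never shrink back, so they are safely pruned
                if len(succ) <= len(target) and succ not in seen:
                    seen.add(succ)
                    frontier.append(succ)
                i = form.find(lhs, i + 1)
    return target in seen

# a standard noncontracting grammar for { a^i b^i c^i | i >= 1 }
rules = [("S", "abc"), ("S", "aSBc"), ("cB", "Bc"), ("bB", "bb")]
print(csg_member(rules, "S", "aabbcc"))   # -> True
print(csg_member(rules, "S", "aabc"))     # -> False
```

The same pruning argument fails for general Type 0 grammars, whose erasing rules can shrink sentential forms; that gap is exactly where the undecidability of Corollary 4.6.1 lives.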
with infinitely many cards in each pile, can one draw a sequence of n ≥ 1 cards from these piles, so that the string xi1 ⋯ xin formed on the top of the cards will equal the string yi1 ⋯ yin formed on the bottom? Example 4.7.1 PCP has the solution of yes for the instance <(01, 0), (110010, 0), (1, 1111), (11, 01)> or, equivalently, for the following instance in the case of the domino problem.
The tuple (i1, i2, i3, i4, i5, i6) = (1, 3, 2, 4, 4, 3) is a witness for a positive solution because x1x3x2x4x4x3 = y1y3y2y4y4y3 = 01111001011111. The positive solution also has the witnesses (1, 3, 2, 4, 4, 3, 1, 3, 2, 4, 4, 3), (1, 3, 2, 4, 4, 3, 1, 3, 2, 4, 4, 3, 1, 3, 2, 4, 4, 3), etc. On the other hand, the PCP has the solution no for <(0, 10), (01, 1)>. The Undecidability of Post's Correspondence Problem Post's correspondence problem is very useful for showing the undecidability of many other problems by means of reducibility. Its undecidability follows from its capacity for simulating the computations of Turing machines, as exhibited indirectly in the following proof through derivations in Type 0 grammars. Theorem 4.7.1 The PCP is an undecidable problem.
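Checking a proposed witness is plain string arithmetic, and the equality claimed above can be verified directly. The helper below is only an illustration of the definition of a witness.

```python
# the instance of Example 4.7.1, as (top, bottom) pairs
pairs = [("01", "0"), ("110010", "0"), ("1", "1111"), ("11", "01")]

def check(witness):
    """Return the common string if the witness works, else None."""
    top = "".join(pairs[i - 1][0] for i in witness)      # indices are 1-based
    bottom = "".join(pairs[i - 1][1] for i in witness)
    return top if top == bottom else None

print(check((1, 3, 2, 4, 4, 3)))      # -> 01111001011111
print(check((1, 2, 3, 4)))            # -> None
```

Repeating a witness concatenates the common string with itself, which is why (1, 3, 2, 4, 4, 3, 1, 3, 2, 4, 4, 3) and every further repetition is also a witness.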
Proof By Corollary 4.6.1 the membership problem is undecidable for Type 0 grammars. Thus, it is sufficient to show how from each instance (G, w) of the membership problem for Type 0 grammars, an instance I can be constructed, such that the PCP has a positive solution at I if and only if w is in L(G). For the purpose of the proof consider any Type 0 grammar G = <N, Σ, P, S> and any string w in Σ*. With no loss of generality assume that #, ¢, and $ are new symbols not in N ∪ Σ. Then let the corresponding instance I = <(x1, y1), . . . , (xk, yk)> of PCP be of the following form. PCP has a positive solution at I if and only if I can trace a derivation that starts at S and ends at w. For each derivation in G of the form S ⇒* α1 ⇒* α2 ⇒* ⋯ ⇒* αm = w there is a witness (i1, . . . , in) of a positive solution such that either

xi1 ⋯ xin = yi1 ⋯ yin = ¢S#̲α̲1#α2#̲α̲3# ⋯ #αm$

or

xi1 ⋯ xin = yi1 ⋯ yin = ¢S#̲α̲1#α2#̲α̲3# ⋯ #̲α̲m$

depending on whether m is even or odd, respectively. (Here α̲ denotes the string α with each of its symbols underlined.) On the other hand, each witness (i1, . . . , in) of a positive solution for PCP at I has a smallest integer t ≥ 1 such that xi1 ⋯ xit = yi1 ⋯ yit. In such a case, xi1 ⋯ xit = yi1 ⋯ yit = ¢S#̲α̲1#α2# ⋯ $ for some derivation S ⇒* α1 ⇒* α2 ⇒* ⋯ ⇒* αm = w. The instance I consists of pairs of the following form. a. A pair of the form (¢S#̲, ¢). b. A pair of the form ($, #w$), and a pair of the form ($, #̲w̲$). c. A pair of the form (X, X̲), and a pair of the form (X̲, X), for each symbol X in N ∪ Σ ∪ {#}.
d. A pair of the form (β, α̲), and a pair of the form (β̲, α), for each production rule α → β in G that satisfies β ≠ ε. e. A pair of the form (X, X̲α̲), and a pair of the form (X̲, Xα), for each production rule α → ε in G and for each X in N ∪ Σ ∪ {#}. The underlined symbols are introduced to allow only (¢S#̲, ¢) as a first pair, and a pair of (b) as a last pair, for each witness of a positive solution. The pair (¢S#̲, ¢) in (a) is used to start the tracing of a derivation at S. The pairs in (b) are used to end the tracing of a derivation at w. The other pairs are used to force the tracing to go from each given sentential form γ to a sentential form γ′, such that γ ⇒* γ′. The tracing is possible because each of the pairs (xi, yi) is defined so that yi provides a "window" into γ, whereas xi provides an appropriate replacement for yi in γ′. The pairs of the form (X, X̲) and (X̲, X) in (c) are used for copying substrings from γ to γ′. The pairs of the form (β, α̲) and (β̲, α), β ≠ ε, in (d) are used for replacing substrings α in γ by substrings β in γ′. The pairs of the form (X, X̲α̲) and (X̲, Xα) in (e) are used for replacing substrings α in γ by the empty string in γ′. The window is provided because for each 1 ≤ i1, . . . , ij ≤ k, the strings x = xi1 ⋯ xij and y = yi1 ⋯ yij satisfy the following properties. a.
If x is a prefix of y, then x = y. Otherwise there would have been a least l such that xi1 ⋯ xil is a proper prefix of yi1 ⋯ yil. In that case (xil, yil) would be equal to (v, uvv′) for some nonempty strings v and v′. However, by definition, no pair of such a form exists in I. b. If y is a proper prefix of x, then the sum of the number of appearances of the symbols # and $ in x is equal to one plus the sum of the number of appearances of # and $ in y. Otherwise, there would be a least l > 1 for which xi1 ⋯ xil and yi1 ⋯ yil do not satisfy the property. In such a case, because of the minimality of l, xil and yil would have to differ in the number of # and $ they contain. That is, by definition of I, (xil, yil) would have to equal either a pair of (b) or the pair (¢S#̲, ¢). However, a pair of (b) is an impossible choice because it implies that xi1 ⋯ xil = yi1 ⋯ yil, and (¢S#̲, ¢) is an impossible choice because it implies that xi1 ⋯ xil−1 = yi1 ⋯ yil−1 (and hence that the property holds). The correctness of the construction can be shown by induction on the number of production rules used in the derivation under consideration or, equivalently, on the number of pairs of type (d) and (e) used in the given witness for a positive solution. Example 4.7.2 If G is a grammar whose set of production rules is {S → aSaSaS, aS → ε}, then the instance of the PCP that corresponds to (G, ε) as determined by the proof of Theorem 4.7.1, is <(¢S#̲, ¢), (a̲S̲a̲S̲a̲S̲, S), (aSaSaS, S̲), (#̲, #aS), (#, #̲a̲S̲), (a̲, aaS), (a, a̲a̲S̲), (S̲, SaS), (S, S̲a̲S̲), (#, #̲), (#̲, #), (a, a̲), (a̲, a), (S, S̲), (S̲, S), ($, #$), ($, #̲$)>.
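An instance of this shape can also be generated and searched mechanically. In the sketch below (my own illustration, not part of the book's construction) the underlined symbols are modeled by a "_" suffix, strings are tuples of symbol names, and a depth-bounded depth-first search that keeps only branches where one concatenation is a prefix of the other looks for a witness.

```python
def underline(s):
    # the underlined copy of a string, one "_"-suffixed symbol per symbol
    return tuple(c + "_" for c in s)

def build_instance(rules, w, symbols):
    """PCP pairs that trace derivations of the grammar, per Theorem 4.7.1."""
    pairs = [(("¢", "S", "#_"), ("¢",))]                       # (a)
    pairs.append((("$",), ("#",) + tuple(w) + ("$",)))         # (b)
    pairs.append((("$",), ("#_",) + underline(w) + ("$",)))
    for x in symbols:                                          # (c) copying
        pairs += [((x,), (x + "_",)), ((x + "_",), (x,))]
    for alpha, beta in rules:
        if beta:                                               # (d) replacing
            pairs += [(tuple(beta), underline(alpha)),
                      (underline(beta), tuple(alpha))]
        else:                                                  # (e) erasing
            for x in symbols:
                pairs += [((x,), (x + "_",) + underline(alpha)),
                          ((x + "_",), (x,) + tuple(alpha))]
    return pairs

def find_witness(pairs, max_depth):
    """Depth-bounded DFS; prunes branches where neither string is a prefix
    of the other, since such branches can never reach equality."""
    def dfs(x, y, seq):
        if seq and x == y:
            return seq
        if len(seq) == max_depth:
            return None
        for i, (xi, yi) in enumerate(pairs):
            nx, ny = x + xi, y + yi
            k = min(len(nx), len(ny))
            if nx[:k] == ny[:k]:
                hit = dfs(nx, ny, seq + [i])
                if hit:
                    return hit
        return None
    return dfs((), (), [])

# the grammar of Example 4.7.2, with w the empty string
rules = [(("S",), ("a", "S", "a", "S", "a", "S")),
         (("a", "S"), ())]
pairs = build_instance(rules, w=(), symbols=("S", "a", "#"))
witness = find_witness(pairs, max_depth=7)
print(witness is not None)   # -> True
```

Because only the pair in (a) is self-consistent as a first pair, any witness the search returns must begin with it, which is the role the underlined symbols play in the proof.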
The instance has a positive solution with a witness that corresponds to the arrangement in Figure 4.7.1.
Figure 4.7.1
An arrangement of PCP cards for describing a derivation for ε, in the grammar that consists of the production rules S → aSaSaS and aS → ε.
The witness also corresponds to the derivation S ⇒ aSaSaS ⇒ aSaS ⇒ aS ⇒ ε in G.
Applications of Post's Correspondence Problem
The following corollary exhibits how Post's correspondence problem can be used to show the undecidability of some other problems by means of reducibility. Corollary 4.7.1 The equivalence problem is undecidable for finite-state transducers.
Proof Consider any instance <(x1, y1), . . . , (xk, yk)> of PCP. Let Σ be the minimal alphabet such that x1, . . . , xk, y1, . . . , yk are all in Σ*. With no loss of generality assume that Δ = {1, . . . , k} is an alphabet. Let M1 = <Q1, Δ, Σ, δ1, q0, F1> be a finite-state transducer that computes the relation Δ* × Σ*, that is, a finite-state transducer that accepts all inputs over Δ, and on each such input can output any string over Σ. Let M2 = <Q2, Δ, Σ, δ2, q0, F2> be a finite-state transducer that on input i1 ⋯ in outputs some w such that either w ≠ xi1 ⋯ xin or w ≠ yi1 ⋯ yin. Thus, M2 on input i1 ⋯ in can output any string in Σ* if xi1 ⋯ xin ≠ yi1 ⋯ yin. On the other hand, if xi1 ⋯ xin = yi1 ⋯ yin, then M2 on such an input i1 ⋯ in can output any string in Σ*, except for xi1 ⋯ xin. It follows that M1 is equivalent to M2 if and only if the PCP has a negative answer at the given instance <(x1, y1), . . . , (xk, yk)>. Example 4.7.3 Consider the instance <(x1, y1), (x2, y2)> = <(0, 10), (01, 1)> of PCP. Using the terminology in the proof of Corollary 4.7.1, Σ = {0, 1} and Δ = {1, 2}. The finite-state transducer M1 can be as in Figure 4.7.2(a),
Figure 4.7.2
The finite-state transducer in (a) is equivalent to the finite-state transducer in (b) if and only if the PCP has a negative solution at <(0, 10), (01, 1)>.
and the finite-state transducer M2 can be as in Figure 4.7.2(b). M2 on a given input i1 ⋯ in nondeterministically chooses between its components Mx and My. In Mx it outputs a prefix of xi1 ⋯ xin, and in My it outputs a prefix of yi1 ⋯ yin. Then M2 nondeterministically switches to M>, M<, or M≠. M2 switches from Mx to M> to obtain an output that has xi1 ⋯ xin as a proper prefix. M2 switches from Mx to M< to obtain an output that is a proper prefix of xi1 ⋯ xin. M2 switches from Mx to M≠ to obtain an output that differs from xi1 ⋯ xin within the first |xi1 ⋯ xin| symbols. M2 switches from My to M>, M<, or M≠ for similar reasons, respectively. The following corollary has a proof similar to that given for the previous one. Corollary 4.7.2 The equivalence problem is undecidable for pushdown automata.
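The heart of the reduction can be probed by brute force: M2's output relation misses the pair (i1 ⋯ in, w) exactly when w equals both concatenations, that is, exactly at a PCP witness. The sketch below (my own illustration, abstracting away the transducer's states) checks the instance <(0, 10), (01, 1)> on small strings; since this instance has a negative PCP answer, no pair is missing and the relation agrees with that of M1.

```python
from itertools import product

pairs = [("0", "10"), ("01", "1")]

def m2_accepts(indices, w):
    """M2 outputs w on input i1...in iff w differs from the top or from the
    bottom concatenation; it can fail to output at most one string."""
    top = "".join(pairs[i - 1][0] for i in indices)
    bottom = "".join(pairs[i - 1][1] for i in indices)
    return w != top or w != bottom

# search small inputs for an output that M1 produces but M2 misses;
# such a pair exists exactly at a PCP witness
missing = [(idx, w)
           for n in range(1, 5)
           for idx in product((1, 2), repeat=n)
           for m in range(0, 9)
           for w in ("".join(bits) for bits in product("01", repeat=m))
           if not m2_accepts(idx, w)]
print(missing)   # -> []
```

Swapping in the positive instance of Example 4.7.1 would make the list nonempty at its witnesses, separating M1 from M2.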
Proof Consider any instance <(x1, y1), . . . , (xk, yk)> of PCP. Let Σ1 be the minimal alphabet such that x1, . . . , xk, y1, . . . , yk are all in Σ1*. With no loss of generality assume that Σ2 = {1, . . . , k} is an alphabet, that Σ1 and Σ2 are mutually disjoint, and that Z0 is a new symbol not in Σ1. Let M1 = <Q1, Σ1 ∪ Σ2, Σ1 ∪ {Z0}, δ1, q0, Z0, F1> be a pushdown automaton that accepts all the strings in (Σ1 ∪ Σ2)*. (In fact, M1 can also be a finite-state automaton.) Let M2 = <Q2, Σ1 ∪ Σ2, Σ1 ∪ {Z0}, δ2, q0, Z0, F2> be a pushdown automaton that accepts an input w if and only if it is of the form in ⋯ i1u, for some i1, . . . , in in Σ2 and some u in Σ1*, such that either u ≠ xi1 ⋯ xin or u ≠ yi1 ⋯ yin. It follows that M1 and M2 are equivalent if and only if the PCP has a negative answer at the given instance. The pushdown automaton M2 in the proof of Corollary 4.7.2 can be constructed to halt on a given input if and only if it accepts the input. The constructed pushdown automaton halts on all inputs if and only if the PCP has a negative solution at the given instance. Hence, the following corollary is also implied from the undecidability of PCP. Corollary 4.7.3 The uniform halting problem is undecidable for pushdown automata.
PCP is a partially decidable problem because given an instance <(x1, y1), . . . , (xk, yk)> of the problem one can search exhaustively for a witness of a positive solution, for example, in {1, . . . , k}* in canonical order. With such an algorithm a witness will eventually be found if the instance has a positive solution. Alternatively, if the instance has a negative solution, then the search will never terminate.
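The exhaustive search described above is short to write down. The sketch below adds a length bound (my own addition) so that it terminates even on negative instances, where the true semi-decision procedure would run forever.

```python
from itertools import product

def pcp_search(pairs, max_len):
    """Enumerate index sequences over {1, ..., k} in canonical order and
    return the first witness found, or None if none exists up to max_len."""
    k = len(pairs)
    for n in range(1, max_len + 1):
        for seq in product(range(1, k + 1), repeat=n):   # canonical order
            top = "".join(pairs[i - 1][0] for i in seq)
            bottom = "".join(pairs[i - 1][1] for i in seq)
            if top == bottom:
                return seq                               # witness found
    return None                                          # inconclusive

positive = [("01", "0"), ("110010", "0"), ("1", "1111"), ("11", "01")]
negative = [("0", "10"), ("01", "1")]

print(pcp_search(positive, 6))   # -> (1, 3, 2, 4, 4, 3)
print(pcp_search(negative, 6))   # -> None
```

On the positive instance of Example 4.7.1 the search halts with the witness (1, 3, 2, 4, 4, 3); on the negative instance the bound merely cuts off a search that, unbounded, would never stop, which is exactly the sense in which PCP is partially decidable but not decidable.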