Complexity Theory - Chapter 01
Computational Problem
Problem instances
In complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a concrete utterance, which can serve as the input for a decision
problem. For example, consider the problem of primality testing. The instance is a number (e.g.
15) and the solution is "yes" if the number is prime and "no" otherwise (in this case "no"). Stated
another way, the instance is a particular input to the problem, and the solution is the output
corresponding to the given input.
To further highlight the difference between a problem and an instance, consider the following
instance of the decision version of the traveling salesman problem: Is there a route of at most
2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to
this particular problem instance is of little use for solving other instances of the problem, such as
asking for a round trip through all sites in Milan whose total length is at most 10 km. For this
reason, complexity theory addresses computational problems and not particular problem
instances.
Decision problems as formal languages
A decision problem has only two possible outputs, yes or no (or alternately 1 or 0) on any input.
Decision problems are one of the central objects of study in computational complexity theory. A
decision problem is a special type of computational problem whose answer is either yes or no, or
alternately either 1 or 0. A decision problem can be viewed as a formal language, where the
members of the language are instances whose output is yes, and the non-members are those
instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a
given input string is a member of the formal language under consideration. If the algorithm
deciding this problem returns the answer yes, the algorithm is said to accept the input string,
otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrary graph. The problem
consists in deciding whether the given graph is connected, or not. The formal language
associated with this decision problem is then the set of all connected graphs—of course, to obtain
a precise definition of this language, one has to decide how graphs are encoded as binary strings.
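A sketch of such a decision procedure, assuming the graph is given as an adjacency list rather than an encoded binary string (sidestepping the encoding detail mentioned above):

```python
from collections import deque

def is_connected(adj: dict) -> bool:
    """Decide membership in the language of connected graphs via BFS.
    `adj` maps each vertex to the list of its neighbours."""
    if not adj:
        return True            # the empty graph is vacuously connected
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)   # accept iff BFS reached every vertex

# accept: the path a-b-c; reject: graph with the isolated vertex c
print(is_connected({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}))  # True
print(is_connected({'a': ['b'], 'b': ['a'], 'c': []}))          # False
```

Accepting corresponds to answering "yes" (the graph is a member of the language), rejecting to "no".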
Function problems
A function problem is a computational problem where a single output (of a total function) is
expected for every input, but the output is more complex than that of a decision problem, that is,
it isn't just yes or no. Notable examples include the traveling salesman problem and the integer
factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of
decision problems. However, this is not really the case, since function problems can be recast as
decision problems. For example, the multiplication of two integers can be expressed as the set of
triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a
member of this set corresponds to solving the problem of multiplying two numbers.
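The recasting can be sketched directly: a decision procedure for the set of triples, plus a deliberately naive way to recover the product using only yes/no answers (linear search over candidates; a real reduction would be cleverer):

```python
def in_mult_language(a: int, b: int, c: int) -> bool:
    """Decide whether (a, b, c) belongs to the set {(a, b, c) : a * b = c}."""
    return a * b == c

def multiply_via_decisions(a: int, b: int) -> int:
    """Solve the function problem 'multiply a and b' using only the
    decision procedure, by scanning candidate values of c."""
    c = 0
    while not in_mult_language(a, b, c):
        c += 1
    return c

print(multiply_via_decisions(6, 7))  # 42
```

This shows the direction of the reduction: anyone who can decide membership in the set can also compute the product.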
Turing Machine
A Turing machine is the original idealized model of a computer, invented by Alan Turing in
1936. Turing machines are equivalent to modern electronic computers at a certain theoretical
level, but differ in many details.
A Turing machine consists of a line of cells known as the "tape", together with a single active
cell, known as the "head". The cells on the tape can have a certain set of possible colors, and the
head can be in a certain set of possible states.
Any particular Turing machine is defined by a rule which specifies what the head should do at
each step. The rule looks at the state of the head, and the color of the cell that the head is on.
Then it specifies what the new state of the head should be, what color the head should "write"
onto the tape, and whether the head should move left or right.
As an example, consider a Turing machine with two possible states of its head and three possible colors on its tape. Such a machine might start from a "blank" tape in which every cell is white. In the analogy with a computer, the "tape" of the Turing
machine is the computer memory, idealized to extend infinitely in each direction.
The initial arrangement of colors of cells on the tape corresponds to the input given to the
computer. This input can contain both a "program" and "data". The steps of the Turing machine
correspond to the running of the computer.
The rules for the Turing machine are analogous to machine-code instructions for the computer.
Given particular input, each part of the rule specifies what "operation" the machine should
perform.
The remarkable fact is that certain Turing machines are "universal", in the sense that with
appropriate input, they can be made to perform any ordinary computation.
Not every Turing machine has this property; many can only behave in very simple ways. In
effect, they can only do specific computations; they cannot act as "general-purpose computers".
A natural question is how simple the rules for a Turing machine can be, while still
allowing the Turing machine to be "universal".
A universal Turing machine has the property that it can emulate any other Turing machine, or
indeed any computer or software system. Given rules for the thing to be emulated, there is a way
to create initial conditions for the universal Turing machine that will make it do the emulation.
Turing machines are widely used in theoretical computer science for proving abstract theorems.
Studying specific Turing machines has been rare.
Complexity measures
For a precise definition of what it means to solve a problem using a given amount of time and
space, a computational model such as the deterministic Turing machine is used. The time
required by a deterministic Turing machine M on input x is the total number of state transitions,
or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing
machine M is said to operate within time f(n), if the time required by M on each input of length n
is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine
operating in time f(n) that solves the problem. Since complexity theory is interested in
classifying problems based on their difficulty, one defines sets of problems based on some
criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing
machine is then denoted by DTIME(f(n)).
Analogous definitions can be made for space requirements. Although time and space are the
most well-known complexity resources, any complexity measure can be viewed as a
computational resource. Complexity measures are very generally defined by the Blum
complexity axioms. Other complexity measures used in complexity theory include
communication complexity, circuit complexity, and decision tree complexity. The complexity of
an algorithm is often expressed using big O notation.
The notion of communication complexity was introduced by Yao in 1979, who investigated the
following problem involving two separated parties (Alice and Bob). Alice receives an n-bit string
x and Bob another n-bit string y, and the goal is for one of them (say Bob) to compute a certain
function f(x,y) with the least amount of communication between them. Note that here we are not
concerned about the number of computational steps, or the size of the computer memory used.
Communication complexity tries to quantify the amount of communication required for such
distributed computations.
Of course they can always succeed by having Alice send her whole n-bit string to Bob, who then
computes the function, but the idea here is to find clever ways of calculating f with fewer than n
bits of communication.
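One classic example of such a clever way is randomized fingerprinting for the EQUALITY function f(x, y) = 1 iff x = y. The sketch below simplifies the prime bound and error analysis; the function names are illustrative:

```python
import random

def _is_prime(n: int) -> bool:
    """Trial-division primality test (adequate for small bounds)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def random_prime(bound: int) -> int:
    """Pick a uniformly random prime below `bound`."""
    while True:
        p = random.randrange(2, bound)
        if _is_prime(p):
            return p

def equality_protocol(x: int, y: int, n: int) -> bool:
    """Alice holds the n-bit number x, Bob holds y. Instead of sending all
    n bits, Alice sends the pair (p, x mod p) -- only O(log n) bits."""
    p = random_prime(max(4, n * n))   # a random prime of O(log n) bits
    message = (p, x % p)              # everything Alice transmits
    return message[1] == y % p        # Bob's verdict

print(equality_protocol(0b1011, 0b1011, 4))  # equal inputs: always True
```

If x = y the protocol always answers correctly; if x ≠ y it errs only when the random prime p happens to divide x − y, which is unlikely for a suitably large prime bound.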
This abstract problem is relevant in many contexts: in VLSI circuit design, for example, one
wants to minimize energy used by decreasing the amount of electric signals required between the
different components during a distributed computation. The problem is also relevant in the study
of data structures, and in the optimization of computer networks.
Alan Turing’s analysis attempting to formalize the class of all effective procedures was carried
out in 1936, resulting in the notion of a Turing machine. Its importance is that it was the first
really general analysis to understand how it is that computation takes place, and that it led to a
convincing and widely accepted abstraction of the concept of effective procedure.
It is worth noting that Turing’s analysis was done before any computers more powerful than desk
calculators had been invented. His insights led, more or less directly, to John von Neumann’s
invention in the 1940’s of the stored program digital computer, a machine with essentially the
same underlying architecture as today’s computers.
Any other changes can be split up into simple changes of this kind. The situation in regard to the
squares whose symbols may be altered in this way is the same as in regard to the observed
squares. We may, therefore, without loss of generality, assume that the squares whose symbols
are changed are always “observed” squares. Besides these changes of symbols, the simple
operations must include changes of distribution of observed squares. The new observed squares
must be immediately recognizable by the computer.
- The term computation refers to the process of producing an output from a set of inputs
in a finite number of steps.
Notation: xy denotes the concatenation of the two strings x and y.
Existing Problems
- Decision Problem
- Search Problem
- Optimization Problem
- Counting Problem
How hard a problem is depends not only on the problem itself but also on the algorithm used to
solve it. Consider the multiplication of two integers. There are (at least) two possible methods:
repeated addition, and the grade-school algorithm.
For one concrete instance, the first method needs 422 additions, while the grade-school algorithm
needs only 3 multiplications and 3 additions.
This suggests measuring the size of the input by the number of digits in the numbers.
The number of basic operations used to multiply two n-digit numbers is at most 2n^2 for the
grade-school algorithm and at least n·10^(n-1) for repeated addition.
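The contrast can be checked with a small sketch. The concrete instance 577 × 423 is a hypothetical choice, since the original example's numbers are not given:

```python
def repeated_addition(a: int, b: int):
    """Multiply a*b by adding a to itself b-1 times; count the additions."""
    total, additions = a, 0
    for _ in range(b - 1):
        total += a
        additions += 1
    return total, additions

def grade_school(a: int, b: int):
    """Digit-by-digit multiplication; count the digit-pair products the
    grade-school method would perform."""
    digits_b = [int(d) for d in str(b)][::-1]   # least significant first
    result, digit_mults = 0, 0
    for i, d in enumerate(digits_b):
        digit_mults += len(str(a))              # one product per digit of a
        result += a * d * 10 ** i               # shifted partial product
    return result, digit_mults

print(repeated_addition(577, 423))  # (244071, 422) -- 422 additions
print(grade_school(577, 423))       # (244071, 9)   -- 9 digit products
```

The addition count grows exponentially in the number of digits of b, while the grade-school count grows only quadratically, which is exactly the point of measuring input size in digits.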
Model of Computation
Models: Recursive Functions, Rewriting Systems, Turing Machines.
All equally powerful.
“Acceptors” produce only a binary (accept/reject) output. “Transducers” can produce more
complicated results. So far all our previous discussions were only with acceptors. A Turing
Machine also accepts or rejects its input. The results left on the tape when the Turing Machine
finishes can be regarded as the “output” of the computation. Therefore a Turing Machine is a
“Transducer”.
[Fig. 1 Turing machine model: an input tape a1 a2 a3 … b b …, scanned by an R/W head connected to a finite control.]
Each cell can store only one symbol. The input to and the output from the finite state automaton
are effected by the R/W head which can examine one cell at a time. In one move, the machine
examines the present symbol under the R/W head on the tape and the present state of an
automaton to determine
(i) a new symbol to be written on the tape in the cell under the R/W head,
(ii) a motion of the R/W head along the tape: either the head moves one cell left (L), or one cell
right (R),
(iii) the next state of the automaton, and
(iv) Whether to halt or not.
The above model can be rigorously defined as follows:
Turing Machines are like simplified computers containing:
- A tape to read/write on: it contains squares with one symbol each, is used for input, output and temporary storage, and is unbounded.
- A read/write head: it can change the symbol on the tape at the current position and moves step by step in either direction.
- A finite state machine, including an initial state and final states.
This model looks simple but is very powerful, and it is the standard model for the rest of the course.
Operation:
- Start in state q0, with the input w on the tape and the head over its first symbol.
- Each step: read the current state q and the symbol a at the current position, look up δ(q, a) = (p, b, D), then change to state p, write b, and move according to D.
- Stop as soon as q ∈ F. What is left on the tape is the output.
- A configuration (w, q, v) denotes the status after each step: the tape contains wv (with infinitely many blanks around it), the head is over the first symbol of v, and the machine is in state q.
- Start configuration: (ε, q0, w) if the input is w.
- End configuration: (v, q, z) for a q ∈ F; the output is z, denoted M(w).
- If the machine does not halt, M(w) is undefined.
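The step-by-step operation just described can be sketched as a small simulator. The transition-table format and the toy machine below are illustrative, not part of any formal definition in these notes:

```python
def run_tm(delta, q0, final, w, blank='b', max_steps=10_000):
    """Simulate a deterministic TM. `delta` maps (state, symbol) to
    (new_state, write_symbol, direction), direction 'L' or 'R'.
    Returns the nonblank tape contents (the output) on reaching a
    final state; raises if no final state is reached in max_steps."""
    tape = dict(enumerate(w))          # sparse tape; blank everywhere else
    pos, q = 0, q0
    for _ in range(max_steps):
        if q in final:                 # stop as soon as q is in F
            cells = sorted(tape)
            return ''.join(tape[i] for i in cells).strip(blank)
        a = tape.get(pos, blank)       # read symbol under the head
        q, b, d = delta[(q, a)]        # look up delta(q, a)
        tape[pos] = b                  # write
        pos += 1 if d == 'R' else -1   # move
    raise RuntimeError("machine did not halt within the step limit")

# toy machine: replace every 1 with 0, halt at the first blank
delta = {('q0', '1'): ('q0', '0', 'R'),
         ('q0', '0'): ('q0', '0', 'R'),
         ('q0', 'b'): ('qf', 'b', 'R')}
print(run_tm(delta, 'q0', {'qf'}, '1101'))  # -> '0000'
```

Because the simulator leaves its result on the tape, this machine is a transducer in the sense used earlier, not merely an acceptor.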
Turing machines are finite objects!
There is an effective encoding of machines into words over an alphabet.
Configurations are also finite, so they can be encoded as well.
A universal simulator machine U only needs to:
- receive an encoded M as input,
- receive the input w of M as well,
- maintain encoded configurations of M and apply its steps.
Let ⟨M⟩ be the encoding of machine M.
Example
Consider a Turing machine whose tape contains b a4 a1 a2 a1 a2 a2 a1 a4 a2 b b, with the R/W head over the a1 in the eighth cell and the machine in state q3. Find the ID (instantaneous description).
Solution
The present symbol under the R/W head is a1. The present state is q3, so a1 is written to the right
of q3. The nonblank symbols to the left of a1 form the string a4a1a2a1a2a2, which is written to
the left of q3. The sequence of nonblank symbols to the right of a1 is a4a2. Thus the ID is
a4a1a2a1a2a2 q3 a1 a4a2
Notes: (1) For constructing the ID, we simply insert the current state in the input string to the left
of the symbol under the R/W head.
(2) We observe that the blank symbol may occur as part of the left or right substring.
Moves in a TM
As in the case of pushdown automata, δ(q, x) induces a change in the ID of the Turing machine. We
call this change of ID a move.
Suppose δ(q, Xi) = (p, Y, L). The input string to be processed is X1X2 … Xn, and the present
symbol under the R/W head is Xi. So the ID before processing Xi is
X1X2 … Xi-1 q Xi … Xn
After processing Xi, the resulting ID is
X1X2 … Xi-2 p Xi-1 Y Xi+1 … Xn
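A move's effect on an ID can be sketched in code. The δ entry below is hypothetical: with δ(q, B) = (p, Y, L), the ID A q B C D becomes p A Y C D, since the head writes Y and returns to the previous cell:

```python
def apply_move(left, q, current, right, delta):
    """Apply one TM move to the ID  left . q . current . right.
    `delta[(state, symbol)]` gives (new_state, written_symbol, 'L' or 'R')."""
    p, y, d = delta[(q, current)]
    if d == 'R':
        # write y, move right: head lands on the first symbol of `right`
        return (left + y, p, right[0] if right else 'b', right[1:])
    else:
        # write y, move left: head lands on the last symbol of `left`
        return (left[:-1], p, left[-1] if left else 'b', y + right)

delta = {('q', 'B'): ('p', 'Y', 'L')}
print(apply_move('A', 'q', 'B', 'CD', delta))  # -> ('', 'p', 'A', 'YCD')
```

Here each symbol is a single character and a blank 'b' appears when the head walks past the recorded portion of the tape, matching the note above that blanks may occur inside an ID.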
Complexity theory also relates to knowledge management (KM) and organizational learning
(OL). "Complex systems are, by any other definition, learning organizations. Complexity theory
is, therefore, on the verge of making a huge contribution to both KM and OL." Complexity
Theory, KM, and OL are all complementary and co-dependent. “KM and OL each lack a theory
of how cognition happens in human social systems; complexity theory offers this missing piece”.
In 1997, a think tank called the Knowledge Management Consortium International (KMCI) was
formed in Washington, DC. The formation of the group acknowledged, "the profound connection
between complexity theory and knowledge management". Complexity theory offers new
approaches to some of the questions that Peter Senge has posed in the field of KM. "In what has
only recently become apparent, the issues Senge speaks of are precisely those that scholars and
researchers of complexity theory have been dealing with for the past 15 years.”
Beginning in the early 1990s, theorists began linking complexity theory to organizational
change. Complexity Theory rejects the idea of organizations as a machine, as well as a planned
approach to organizational change. Rather, it agrees with the emergent approach that power and
constant change are crucial elements of organizational life. It can also be used to explain the
often paradoxical nature of organizations.