We have seen that the difficulty of recognising
a language, or of solving a problem, depends much more on its complexity
class than on any technological properties of the machine on which the solution
runs.
Some problems are fundamentally harder to solve than others.
Those of exponential complexity are fundamentally intractable in their full
generality because any physical machine would very quickly run out of resources
as the input size was increased.
So if we discover that the problem we have to solve is in this class, we might
as well give up!
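A quick numeric comparison makes the point vivid. The sketch below contrasts an illustrative polynomial step count (n^3) with an exponential one (2^n); the step-count functions are invented for the illustration, not taken from any particular algorithm.

```python
# Contrast polynomial (n^3) and exponential (2^n) step counts.
# These bounds are illustrative only, not drawn from real algorithms.

def poly_steps(n: int) -> int:
    return n ** 3          # a typical polynomial bound

def exp_steps(n: int) -> int:
    return 2 ** n          # a typical exponential bound

for n in (10, 50, 100):
    print(n, poly_steps(n), exp_steps(n))
```

At n = 100 the polynomial count is only a million, while 2^100 already exceeds 10^30 steps, far beyond what any physical machine could perform.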
Problems of polynomial complexity are more interesting, and more problematic.
DTIME(n^i), for
some i ≥ 1, is the class of problems whose deterministic time complexity
is O(n^i), that is, those that can be solved by a deterministic
TM that performs no more than O(n^i) steps on an input of length
n.
The union of these classes of problems, for all i ≥ 1, contains all
the problems that can be solved in deterministic polynomial time.
We denote this entire collection of problems by P.
Those with high values of i are, of course, very expensive to compute;
an algorithm of O(n^50) would hardly be considered efficient.
However, in practice, problems in P usually have relatively low-degree
polynomial solutions.
It has therefore been conjectured that P is the class of tractable
problems, that is, those that can be effectively computed.
A number of important problems appear not
to be in P but to have efficient nondeterministic algorithms.
NTIME(n^i), for some i ≥ 1, is the class of problems
whose nondeterministic time complexity is O(n^i), that is,
those that can be solved by a nondeterministic TM that performs no more than
O(n^i) steps on an input of length n.
The union of these classes of problems, for all i ≥ 1, contains all
the problems that can be solved in nondeterministic polynomial time.
We denote this entire collection of problems by NP.
Example 1: The Hamiltonian Circuit Problem (HC)
Given any undirected graph, is there a path that visits each vertex exactly once and returns to its starting vertex?
The size of the input to this problem is the number of vertices in the graph.
No known deterministic algorithm can recognise Hamiltonian graphs in polynomial time.
However, there is a trivial nondeterministic algorithm: guess the edges of the cycle, then verify that they do indeed form a Hamiltonian circuit.
This algorithm runs in polynomial time.
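The verification half of this guess-and-check procedure can be sketched directly. The following minimal illustration (the function and variable names are my own) checks a guessed vertex sequence against an undirected graph in time polynomial in the number of vertices.

```python
# Verify a certificate for the Hamiltonian Circuit problem.
# graph: adjacency sets of an undirected graph; cycle: a guessed vertex order.

def is_hamiltonian_circuit(graph: dict, cycle: list) -> bool:
    # Every vertex must appear exactly once in the guessed cycle.
    if set(cycle) != set(graph) or len(cycle) != len(graph):
        return False
    # Consecutive vertices (wrapping around) must be joined by an edge.
    return all(cycle[(i + 1) % len(cycle)] in graph[v]
               for i, v in enumerate(cycle))

# A square 0-1-2-3 with one diagonal 0-2:
square = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(is_hamiltonian_circuit(square, [0, 1, 2, 3]))   # True
```

The check itself is clearly polynomial; the exponential cost of the deterministic approach lies in enumerating all the possible guesses.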
This distinction between P
and NP is analogous to the difference between efficiently finding
a proof of some theorem in a formal system and checking that a presented 'proof'
is valid.
Intuition suggests that proof checking should be less complex than theorem proving,
but we do not know this for a fact.
We can define further classes of problem
in a similar way. For example,
DSPACE(n^i), for some i ≥ 1, is the class of problems
whose deterministic space complexity is O(n^i).
The union of these classes of problems, for all i ≥ 1, contains all
the problems that can be solved in deterministic polynomial space.
We denote this entire collection of problems by PSPACE.
It has been shown that
P ⊆ NP ⊆ PSPACE ⊆ EXPTIME
and that P ≠ EXPTIME.
So at least one of the containments in
the first line is proper, although it is not known which!
The reduction of a language, L', to another language, L, is a mapping g (computed by a TM that always halts) such that, for all strings x,
x is in L' if and only if g(x) is in L.
We say that L' is polynomial-time reducible to L if
there is a polynomial-time bound TM that, for each input string x, produces an output string y that is in L if and only if x is in L'.
Let L' be polynomial-time reducible to L. Then:
L' is in P if L is in P.
Assume that the reduction is p1(n)-time
bounded and that L is recognisable in time p2(n),
where p1 and p2 are polynomials.
Then L' can be recognised in polynomial time as follows: given an input x of length n, first compute g(x), which takes at most p1(n) steps; note that g(x) can therefore be no longer than p1(n). Then test whether g(x) is in L, which takes at most p2(p1(n)) steps. The total time, p1(n) + p2(p1(n)), is a polynomial in n.
Therefore, L' is in P.
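A toy instance makes the composition concrete. Purely for illustration (these languages are invented for the example), take L' to be the strings over {a, b} containing an even number of a's, L to be the binary numerals of even numbers, and g the map sending x to the binary representation of its count of a's.

```python
# A polynomial-time reduction g from L' (even number of 'a's)
# to L (binary numerals ending in '0', i.e. even numbers).

def g(x: str) -> str:
    return bin(x.count('a'))[2:]      # runs in time polynomial in len(x)

def in_L(y: str) -> bool:
    return y.endswith('0')            # a polynomial-time recogniser for L

def in_L_prime(x: str) -> bool:
    return in_L(g(x))                 # decide L' by composing the two

print(in_L_prime("abab"), in_L_prime("ab"))   # True False
```

The composed recogniser runs the reduction and then the recogniser for L, exactly as in the proof above.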
Similarly, we can prove that,
if L' is polynomial-time reducible to L, then
L' is in NP if L is in NP.
There are other kinds of reduction. For
example:
A log-space transducer is an off-line TM that always halts, uses O(log n)
storage cells for an input string of length n, and has a write-only
output tape on which the head never moves left.
We say that L' is log-space reducible to L if
there is a log-space transducer that, for each input string x, produces an output string y that is in L if and only if x is in L'.
It can be shown that
if L' is log-space reducible to L, then
L' is in P if L is in P.
A language L is complete, for some class C of languages, with respect to polynomial-time (or, respectively, log-space) reductions, if L is in C and every language in C is polynomial-time (respectively, log-space) reducible to L.
A language L is hard, for some class C of languages, with respect to polynomial-time (or, respectively, log-space) reductions, if every language in C is so reducible to L; L itself need not be in C.
The special cases that are of primary importance in computation are the NP-hard and, above all, the NP-complete problems.
The significance of the class NP-complete
is that it includes many frequently encountered problems which have been studied
in great detail but for which no polynomial-time solutions are known.
This failure to discover any such solution strongly suggests that none of them
is tractable, but there is no proof of this conjecture.
The first problem that was shown to be NP-complete (by Stephen A. Cook in 1971) was that of the satisfiability of Boolean expressions (SAT).
Given any propositional logic formula with m variables, is there an assignment of logical values {T, F} to those variables that makes the formula evaluate to T?
The first part of the proof that this problem
is NP-complete consists of showing that it is in NP.
Consider an NTM that generates successive sequences of logical values (as many
of them as there are distinct logical variables in the input string), substitutes
them for the corresponding variables and traverses the resulting formula several
times, performing the reductions defined by the Boolean truth tables as it goes,
until the formula is reduced to T or F. This clearly requires
no more than polynomial time.
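The deterministic half of this guess-and-check procedure can be sketched in a few lines. This minimal illustration (the representation and function name are my own, loosely following the common convention of signed integers for literals) evaluates a formula in conjunctive normal form under a guessed assignment in time linear in the size of the formula.

```python
# Check a guessed truth assignment against a CNF formula.
# A formula is a list of clauses; each clause is a list of signed
# integers, where k means variable k and -k means its negation.

def satisfies(formula: list, assignment: dict) -> bool:
    # Every clause must contain at least one literal made true.
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

# (x1 or not x2) and (x2 or x3)
phi = [[1, -2], [2, 3]]
print(satisfies(phi, {1: True, 2: True, 3: False}))   # True
```

The nondeterminism lies entirely in guessing the assignment; checking it, as here, is easy.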
Note that this is equivalent to saying that the language consisting of all strings representing satisfiable, well-formed formulae of propositional calculus is in NP.
Although these formulae are written over an unbounded alphabet, since we may introduce new logical variables at will, a language, Lsat, may be defined with a finite alphabet, consisting of the logical symbols, a single non-logical symbol, say x, and the binary digits 0 and 1. Then every logical variable may be uniquely coded as x followed by a suitable binary integer.
In any expression of length n, there are no more than n/2 distinct variables, each of which may be encoded with no more than 1 + log2 n symbols. The length of the coded form of such an expression is therefore no more than n log n.
We may therefore consider words in Lsat to be of the same length as the expressions they represent, since the result will not depend on whether we use n or n log n for the length of the word, because log(n log n) ≤ 2 log n, and we will be dealing with log-space reductions.
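The coding just described can be sketched as follows; the function name is my own, and the scheme simply replaces the i-th distinct variable with the symbol x followed by the binary form of i.

```python
# Encode each distinct variable name as 'x' + binary index,
# so that formulae over arbitrarily many variables fit into
# a fixed finite alphabet.

def encode_variables(variables: list) -> dict:
    return {v: 'x' + bin(i)[2:]
            for i, v in enumerate(variables, start=1)}

codes = encode_variables(['p', 'q', 'r'])
print(codes)   # {'p': 'x1', 'q': 'x10', 'r': 'x11'}
```

With n distinct variables the longest code is x followed by about log2 n binary digits, matching the 1 + log2 n bound above.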
The second part of the proof consists in
showing that every language in NP is reducible to Lsat.
This may be done by giving, for every NTM M that is time-bounded by a
polynomial p(n), a log-space algorithm that takes as input a string x
and produces a Boolean formula Ex that is satisfiable if and
only if M accepts x.
The detailed construction of this algorithm may be found in the literature (e.g.
Hopcroft and Ullman, p.325) and will not be covered here.
Its existence is enough to demonstrate that, if there were a polynomial-time
algorithm for accepting Lsat, then it could be used, in
conjunction with this log-space reduction, to accept any language in NP
in polynomial time.
And that would be enough to imply that P = NP.
Using similar techniques, many other problems have been shown to be NP-complete, by reducing SAT (or a problem to which SAT has already been reduced) to them in log-space. These include the Hamiltonian Circuit problem introduced above, the travelling salesman problem, the clique problem, and graph colouring.
If you have comments or
suggestions, email to author at b.cohen@city.ac.uk
Note: this web page is taken from a university in England, from a course similar to CT313. It is reproduced here for comparison and reference only, not for any profit, and please note that the original is copyrighted.