Turing machines can only have a finite number of states. So you can do all computation in a single step only if the problem has a finite number of possible inputs. At that point, your program is basically a lookup table.
Nontrivial computational problems will have an infinite number of possible inputs, so this strategy won’t work in general.
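The finite-input case can be made concrete. A toy sketch (the problem here, 3-bit parity, is made up for illustration): precompute the answer for every possible input, and "computation" collapses to a single lookup.

```python
# A problem with finitely many inputs reduces to a lookup table.
# Toy example: parity of every 3-bit string, precomputed up front.
INPUTS = [f"{i:03b}" for i in range(8)]
PARITY_TABLE = {s: s.count("1") % 2 for s in INPUTS}

def parity(bits: str) -> int:
    # The "computation" is one table lookup.
    return PARITY_TABLE[bits]
```

The table has 8 entries here; for chess, the analogous table would need one entry per reachable position, which is where the objection below comes in.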
Can you actually produce a look up table for perfect chess play?
Such a table cannot actually be created: it is too large, not just to compute, but even to store in memory if it were somehow handed to you. It is not out of the question that computing resources continue to grow enough that it eventually becomes feasible, but we have no idea whether they will, and it would be a long time in the future.
Theoretical Turing machines are very simple, but have infinite resources, and are thus a bad way of determining the difficulty of things.
If you can’t do it with a Turing machine, maybe you can’t do it at all. This hedges against questions like ‘will someone come up with a better algorithm, or a more efficient data structure?’, so things don’t get broken when something we thought was impossible is accomplished.
Other approaches include working out a minimum amount of computation required, then looking at:
- Practical costs (today)*
- The theoretical minimum energy cost, per the physics of erasing information (Landauer’s principle)
(Both might get thrown out of whack by quantum computers, if they pan out. Also, reversible computing might lower theoretical and practical costs.)
*This one can be affected by ‘if someone buys hardware specifically for doing this once, they can reuse it’, if there’s not enough padding.
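The theoretical floor mentioned above can be sketched with back-of-the-envelope arithmetic: Landauer’s principle puts the cost of erasing one bit at k·T·ln(2) joules. The temperature and the 2^128 bit-erasure count below are illustrative assumptions, not a claim about any specific task.

```python
import math

# Landauer's principle: erasing one bit costs at least k*T*ln(2) joules.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed operating temperature, K (room temperature)

energy_per_bit = k_B * T * math.log(2)   # roughly 3e-21 J at 300 K

# Illustrative lower bound for a computation that must erase 2**128 bits
# (e.g. stepping a 128-bit counter through every state):
total = energy_per_bit * 2**128
print(f"{energy_per_bit:.2e} J per bit erased, {total:.2e} J total")
```

At these assumed numbers the total comes out near 10^18 J, which is why brute force at that scale is dismissed on energy grounds alone, before any hardware costs.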
Yes, I suppose I left out that you can determine that something can’t be computed at all if it can’t be done with a Turing machine. Proofs of impossibility are actually somewhat important.
Practical costs today are shifting sands, but worthwhile to estimate for difficult and important tasks, while being useless for a number of others because they are so hard to determine. Which algorithm, and what should be the reference computer? (Or the reference process for building or buying a computer. It would be silly to include a massive R&D program, but what about a small one that doesn’t take very long and produces a component that vastly improves results?)
Reversible computing is a huge question mark, but so are the lower bounds on the minimum energy needed to erase information. [I think the theories haven’t really been tested in that regime, because we have no idea how to, and are many orders of magnitude away.] On a side note, I expect, based purely on my own intuition, that reversible computing will actually end up taking more energy in all practical cases than the minimum reachable efficiency of non-reversible computing.
Quantum computing does change the cost quite a bit for very specific algorithms, if quantum computers actually manage to scale to large enough sizes for them, but that is still many orders of magnitude away for most problems where they are likely superior. Nonetheless, I think that an algorithm that is sped up sufficiently by a quantum computer should be evaluated using its quantum version as soon as we have good numbers for it.
Are there computational problems that can’t be represented with Turing machines?
That depends what you mean by “computational problems”. In its usual definition, a Turing machine takes a single input string, and provides a single bit as output: “Accept” or “Reject”. (It may also of course run forever without providing an answer.)
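That accept/reject definition is simple enough to sketch directly. A minimal simulator, with a made-up example machine (accept binary strings containing an even number of ‘1’s):

```python
# Minimal sketch of the usual definition: a Turing machine reads an input
# string and halts in an accept or reject state (or runs forever).
BLANK = "_"
DELTA = {
    # (state, symbol) -> (new_state, symbol_to_write, head_move)
    ("even", "0"):   ("even",   "0",   +1),
    ("even", "1"):   ("odd",    "1",   +1),
    ("odd",  "0"):   ("odd",    "0",   +1),
    ("odd",  "1"):   ("even",   "1",   +1),
    ("even", BLANK): ("accept", BLANK, +1),
    ("odd",  BLANK): ("reject", BLANK, +1),
}

def run(tape_input: str) -> bool:
    """Simulate the machine; True means 'Accept', False means 'Reject'."""
    tape = dict(enumerate(tape_input))
    state, head = "even", 0
    while state not in ("accept", "reject"):
        symbol = tape.get(head, BLANK)
        state, written, move = DELTA[(state, symbol)]
        tape[head] = written
        head += move
    return state == "accept"
```

The single output bit is the final state; a machine can also leave symbols on the tape, which is what the "leave a number on the tape" remark below is getting at.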
For example, the question, “What is the shortest path through this list of cities?” isn’t possible to encode directly into a standard Turing machine. Instead, the Turing machine can answer, “Is there a path through this list of cities with length at most k?”, for some constant k.
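The decision version loses nothing: you can recover the optimum by asking the yes/no question for different values of k. A sketch with a toy (hypothetical) distance matrix, using brute force as a stand-in for the decision oracle:

```python
from itertools import permutations

# Toy symmetric distance matrix for 4 cities (made-up weights).
D = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]

def tour_at_most(k: int) -> bool:
    """Decision version: is there a round trip through all cities of cost <= k?"""
    n = len(D)
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        cost = sum(D[a][b] for a, b in zip(route, route[1:]))
        if cost <= k:
            return True
    return False

# Binary search on k turns the yes/no oracle into the optimal tour length.
lo, hi = 0, sum(map(sum, D))
while lo < hi:
    mid = (lo + hi) // 2
    if tour_at_most(mid):
        hi = mid
    else:
        lo = mid + 1
print(lo)  # length of the shortest tour
```

Since each answer halves the range of candidate values for k, only logarithmically many calls to the decision oracle are needed, so the two formulations have essentially the same difficulty.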
If you don’t like this, there are ways to modify the definition of a Turing machine. But for the purposes of studying computational complexity, all reasonable definitions seem to be equivalent.
I think the machine halting can be interpreted as accepting, and you might be allowed to leave a number on the tape.
I was wondering whether cases like the halting problem might be interesting edge cases, but TMs are not especially inferior. The Church-Turing thesis is about there not being anything interesting missed by what is captured by Turing machines.