The Turing model is pretty outdated, and doesn’t really describe everything modern computers do.
However, you can run a simulation of a modern computer inside the Turing model (essentially running an interpreted, rather than compiled, language), which means, very roughly, that any problem that is provably undecidable in the Turing model is necessarily undecidable in the RASP model.
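To make the simulation point concrete, here is a toy sketch: a tiny interpreter for a made-up RASP-flavored register machine. The instruction set and every name in it are invented for illustration; the point is only that a simple fetch-execute stepper (which a Turing machine could in turn emulate, however slowly) suffices to run such programs.

```python
# Toy register-machine interpreter. Instructions are tuples; the encoding
# is invented for this sketch, not any real RASP definition.

def run(program, registers, max_steps=1000):
    """Step through (op, *args) instructions until HALT or the budget runs out."""
    pc = 0
    for _ in range(max_steps):
        op, *args = program[pc]
        if op == "HALT":
            return registers
        elif op == "SET":        # SET r, v   ->  registers[r] = v
            registers[args[0]] = args[1]
        elif op == "ADD":        # ADD r, s   ->  registers[r] += registers[s]
            registers[args[0]] += registers[args[1]]
        elif op == "JNZ":        # JNZ r, t   ->  jump to t if registers[r] != 0
            if registers[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    raise RuntimeError("step budget exhausted")

# Compute 3 * 4 by repeated addition.
prog = [
    ("SET", "acc", 0),
    ("SET", "i", 4),
    ("SET", "neg1", -1),
    ("JNZ", "i", 5),          # if i != 0, enter the loop
    ("HALT",),
    ("ADD", "acc", "three"),  # loop body: acc += 3
    ("ADD", "i", "neg1"),     #            i  -= 1
    ("JNZ", "i", 5),          # repeat while i != 0
    ("HALT",),
]
result = run(prog, {"three": 3})  # result["acc"] == 12
```

The interpreter is itself just a loop over a finite table of cases, which is exactly the kind of thing a Turing machine can mechanize.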
In order to exceed the limitations of the Turing model, it isn’t sufficient to be able to do things Turing computers can’t; you must be capable of doing things Turing computers can’t even simulate. An additional hardware logic gate doesn’t cut it, if you can create the same logic by mixing existing logic gates. You have to create logic which cannot be described in existing logic systems. (If anybody wants to try this, good luck. The -thought- makes my brain hurt.)
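The logic-gate point can be made concrete: NAND is functionally complete, so adding, say, a hardware XOR gate grants no new logical power, because XOR can already be expressed entirely in terms of NAND. A minimal sketch:

```python
# NAND alone is functionally complete; a "new" gate like XOR adds no
# logical power, since it can be rewritten purely in terms of NAND.

def nand(a, b):
    return not (a and b)

def xor_via_nand(a, b):
    # Standard four-NAND construction of XOR.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustive check against Python's own inequality-as-XOR.
assert all(xor_via_nand(a, b) == (a != b)
           for a in (False, True) for b in (False, True))
```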
Note that the proposed solution for, for example, the Halting Problem is not in fact solving something that Turing computers can’t, as the solution is effectively to run a Turing algorithm for an infinite number of steps. It’s taking out one of the assumptions that went into the proof, that you don’t -get- an infinite number of steps. And if anybody is taking this seriously, I pose a question: What happens when the cardinality of the infinite operations to solve the problem is greater than the cardinality of the infinite time spent solving that problem?
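For reference, the proof in question is the usual diagonal argument, which can be sketched in Python-flavored pseudocode; `halts` here is hypothetical and cannot actually be implemented, which is the point:

```
# Suppose a total decider halts(prog, inp) existed (it cannot).
def diagonal(prog):
    if halts(prog, prog):
        loop forever
    else:
        halt

# diagonal(diagonal): if halts says it halts, it loops; if halts says it
# loops, it halts. Contradiction either way -- provided halts must answer
# after finitely many steps, which is the assumption being discussed.
```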
ETA: −6? Seriously? I’m not sure exactly what’s being downvoted here, but I assume it’s my comment that the Turing model is outdated. Well, it is. Not going to apologize for it; computers haven’t even vaguely resembled its common-memory-model in decades, and we regularly run algorithms which are far more efficient than the Turing model allows. Even RASP is hopelessly outdated at this point, but it is at least closer to what computers are actually doing. Given that the Church-Turing thesis has not in fact been proven (largely because those trying to prove it gave up on trying to define what exactly it meant), the Turing model largely persists because of its simplicity.
Taking the karma hit to clarify why I, at least, downvoted this comment. When you say the Turing model is outdated, you seem to be assuming that the model was originally intended as a physical model of how actual computers do (or should) work. But that was never its purpose. It was supposed to be a mathematical model that captures the intuitive notion of an effective procedure. All the talk of tapes and tape heads is just meant to aid understanding, and maybe that part is outdated, but the actual definition of a Turing machine itself can be given purely mathematically, without any assumptions about physical instantiation. Saying the Turing model is outdated would only make sense if there were good reason to doubt the Church-Turing thesis, and there isn’t.
Thank you.
I seriously disagree, incidentally. For example, it makes some pretty specific assumptions about instantiation; it assumes the sort of computer they had seventy-some-odd years ago. The single pool of memory, the single processor, the scan-one mechanism of traversal: all of these are assumptions which have serious effects on the very field the mathematical model was devised to consider, computability.
(And I can point out one good reason to doubt the Church-Turing thesis. A Turing machine is incapable of replicating the nondeterministic behavior of concurrent processing. I’ve never yet seen a process which depended on this nondeterminism, but it is nonetheless possible to devise one. Maybe to generate random numbers, I’m not sure.)
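To make the nondeterminism in question concrete, here is a toy sketch (invented for illustration) that deterministically enumerates every interleaving of two unsynchronized read-increment-write "threads" on a shared counter; scheduling alone can produce different final values:

```python
# Two "threads" each do a non-atomic read / write-incremented-value pair on
# a shared counter. Enumerating every interleaving of their primitive steps
# yields the set of final values that scheduling alone can produce.
from itertools import combinations

def final_values():
    outcomes = set()
    steps_per_thread = 2          # each thread: one read, then one write
    total = 2 * steps_per_thread
    # An interleaving = a choice of which global step slots thread 0 occupies;
    # combinations() preserves each thread's internal step order.
    for positions in combinations(range(total), steps_per_thread):
        counter = 0
        local = {0: None, 1: None}
        idx = {0: 0, 1: 0}        # next step index per thread
        for pos in range(total):
            tid = 0 if pos in positions else 1
            if idx[tid] == 0:               # step 0: read the counter
                local[tid] = counter
            else:                           # step 1: write read value + 1
                counter = local[tid] + 1
            idx[tid] += 1
        outcomes.add(counter)
    return outcomes               # {1, 2}: a lost update is possible
```

Note that this enumeration is itself perfectly deterministic; whether cataloguing the possible outcomes counts as "replicating" the nondeterminism is exactly the crux.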