A computational system is Turing complete if it can simulate a Turing machine, which is a sort of bare-bones abstract model of the low-level process of computation. This is important because you can, in principle, simulate any Turing complete system in any other Turing complete system (though doing so will be inefficient in a lot of cases); in other words, if you’ve got enough time and memory, you can calculate anything calculable with any system meeting a fairly minimal set of requirements. Thanks to this result, we know that there’s a deep equivalence between different flavors of computation that might not otherwise be obvious. There are some caveats, though: in particular, the idealized version of a Turing machine assumes infinite memory.
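To make that concrete, here’s a minimal sketch of a Turing machine simulator in Python (the `run` function, the transition-table format, and the little increment machine are all just my own illustration, not anything canonical):

```python
# A minimal Turing machine simulator. The transition table maps
# (state, symbol) -> (symbol to write, head move, next state).
def run(transitions, tape, state="start", blank="_"):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: increment a binary number. Scan right to the end
# of the input, then walk left turning 1s into 0s until a 0 (or a
# blank, if the number was all 1s) absorbs the carry.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run(increment, "1011"))  # 1011 is 11; prints 1100, which is 12
```

The point being that this is the whole mechanism: a finite lookup table, a head position, and a tape. Anything that can emulate that loop can compute whatever this can.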
Now, to answer your actual question, the branch of mathematics that this comes from is called computability theory, and it’s related to the study of mathematical logic and formal languages. The textbook I got most of my understanding of it from is Hopcroft, Motwani, and Ullman’s Introduction to Automata Theory, Languages, and Computation, although it might be worth looking through the “Best Textbooks on Every Subject” thread to see if there’s a consensus on another.
Curious, does “memory space” mean something more than just “memory”?
Just a little more specific. Some people may hear “memory” and associate it with, say, the duration of their memories rather than how much can be physically held. For example, when a human is said to have a ‘really good memory’, we don’t tend to be making a claim about the theoretical maximum amount of stuff they could remember.
No, although either term might be a little misleading depending on what connotations you attach to it: an idealized Turing machine stores all its data on a rewritable tape (or several tapes, but that’s equivalent to the one-tape version) of symbols that’s infinite in both directions. You could think of that as analogous to both memory and disk, or to whatever the system you’re actually working with uses for storage.
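If it helps, one easy way to picture (or implement) a tape that’s infinite in both directions is a sparse map from integer positions to symbols, where negative indices are the cells to the left of wherever you started and anything never written reads back as blank. A quick sketch, again just my own illustration:

```python
from collections import defaultdict

# A two-way infinite tape: any integer position is a valid cell, and
# cells that were never written read back as the blank symbol. Only
# written cells actually take up memory.
tape = defaultdict(lambda: "_")
tape[0] = "1"
tape[-3] = "0"  # three cells to the left of the starting position

print(tape[-3], tape[0], tape[99])  # prints: 0 1 _
```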
Right, I know that. Was just curious why the extra verbiage in a post meant to explain something.
Because it’s late and I’m long-winded. I’ll delete it.