Somebody who insists that 1 should be encoded as a red light and 0 as a green light would claim that the original circuit is not an adder. A view that recognises that such number-color associations are arbitrary, and thus should not be the turning points of the argument, would classify a red-adder and a green-adder as doing the same calculation.
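To make the point concrete, here is a minimal sketch (my own illustration, not from the original post): the same light-level behaviour, read under the designer's red=1 convention, is a half-adder, while reading the very same lights under the opposite convention yields a different, but equally definite, Boolean function.

```python
# Physical behaviour: a half-adder wired in terms of lights,
# built with the designer's convention red=1, green=0.
def light_half_adder(a_light, b_light):
    to_bit = {"red": 1, "green": 0}           # designer's encoding
    to_light = {1: "red", 0: "green"}
    a, b = to_bit[a_light], to_bit[b_light]
    s, c = a ^ b, a & b                       # sum and carry
    return to_light[s], to_light[c]

# Reading the same lights with the opposite convention (red=0, green=1):
def reread(a, b):
    flip = {"red": 0, "green": 1}
    s_light, c_light = light_half_adder(
        {0: "red", 1: "green"}[a], {0: "red", 1: "green"}[b])
    return flip[s_light], flip[c_light]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", reread(a, b))  # not a+b: a different (but definite) function
```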
What kind of desirability condition does a Turing machine fail, such that it is not the correct mathematical representation of natural computation? Turing machines are very good at describing symbol dances, and those dances don't need to have meanings in order to be described. It seems to me that if somebody desires something beyond that, then the additional bit can't really be about the computation. If you are not satisfied that the word “cat” is made out of c, a and t, then you are asking about felines or something rather than anything about words. In the same way, if you have “something that brrrrs this way” and neither zigging that way nor zagging that way satisfies you, then you are asking about something other than sequences of operations.
However, such a person would still have to admit that the mutual information between input and output bits is high. There is something inherently “computery” about that circuit even if you reject its specific interpretation; it just computes a different function.
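As a quick worked example (assuming a half-adder with uniformly random input bits): because the output is a deterministic function of the input, the mutual information is just the output entropy, and relabelling 0/1, or red/green, does not change that number.

```python
# Mutual information between the two input bits and the (sum, carry) output of
# a half-adder, with inputs uniform.  Since the circuit is deterministic,
# H(Y|X) = 0 and I(X;Y) = H(Y); recoding the symbols leaves it unchanged.
from collections import Counter
from math import log2

inputs = [(a, b) for a in (0, 1) for b in (0, 1)]     # uniform over 4 inputs
outputs = [(a ^ b, a & b) for a, b in inputs]          # deterministic circuit

p_y = Counter(outputs)
H_y = -sum((n / 4) * log2(n / 4) for n in p_y.values())
print(H_y)  # 1.5 bits = I(input; output)
```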
Great question. Hopefully soon I’ll write a longer post on exactly this topic, but for now you can look at this recent post, Beyond Kolmogorov and Shannon, by me and Alexander Gietelink Oldenziel that tries to explain what is lacking in Turing Machines. This intuition is also found in James Crutchfield’s work, e.g. here, and in the article by him in the external resources section in this post.
In short, the extra desirability condition is a mechanism to generate random bits. I think this is fundamental to computation “in the real world” (ie brains and other natural systems) because of the central role uncertainty plays in the functioning of such systems. But a complete explanation for why that is the case will have to wait for a longer post.
Admittedly, I am overloading the word “computation” here, since there is a very well developed mathematical framework for computation in terms of Turing Machines. But I can’t think of a better one at the moment, and I do think this more general sense of computation is what many biologists are pointing towards (neuroscientists in particular) when they use the word. Maybe I should call it “natural computation.”
Probabilistic Turing machines already exist. They are a standard extension of TMs that transition from state to state with arbitrary probabilities (not just 0 and 1) and can thus easily generate random bits.
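As a rough sketch of what that buys you (my own toy illustration, not a full formalisation of a probabilistic TM): a single-state rule that writes 1 with probability p and 0 otherwise fills the tape with random bits, which is exactly what a deterministic transition table cannot do.

```python
# Minimal sketch of a probabilistic transition rule: in state q0, reading a
# blank cell, write 1 with probability p and 0 otherwise, then move right.
import random

def probabilistic_step(tape, head, p=0.5):
    tape[head] = 1 if random.random() < p else 0
    return head + 1                                    # move right

tape, head = [None] * 8, 0
while head < len(tape):
    head = probabilistic_step(tape, head)
print(tape)  # e.g. [1, 0, 0, 1, 1, 0, 1, 0]
```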
Further, we know that deterministic TMs can do anything that probabilistic TMs can, albeit potentially less efficiently.
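The standard way to see this is that a deterministic machine can enumerate every possible coin-flip sequence and tally the outcomes, paying a cost exponential in the number of flips. A hedged sketch, with `prob_machine` as a hypothetical stand-in for whatever the probabilistic machine does with its flips:

```python
# Deterministic simulation of a probabilistic machine by brute-force
# enumeration of all coin-flip sequences -- exact, but exponentially slower.
from itertools import product

def prob_machine(coin_flips):
    # toy example: accept iff a majority of the flips came up heads (1)
    return sum(coin_flips) > len(coin_flips) / 2

def acceptance_probability(n_flips):
    runs = list(product((0, 1), repeat=n_flips))       # 2**n_flips branches
    return sum(prob_machine(r) for r in runs) / len(runs)

print(acceptance_probability(5))  # exact probability, no randomness used
```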
I suspect you’re not really rejecting TMs per se, but rather the entire theory of computability and complexity that is standard in CS, more specifically the Church-Turing thesis which is the foundation of the whole thing.
It strikes me that a natural way for Turing machines to handle uncertainty would be to not assume that the input tape starts empty, but instead allow it to be anything (similar to an uninitialised variable).
Then you would probably get situations where two machines are not equivalent because of rules that never trigger when started on the all-0 tape, as in the sketch below.
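Here is a toy sketch of that worry (my own formalisation, with an invented rule-table format): two tables that are indistinguishable when started on the all-0 tape, because one of their rules never fires there, but that diverge as soon as the initial tape is allowed to contain a 1.

```python
# (state, read) -> (write, move, next_state); a tiny machine runner.
def run(rules, tape, steps=5):
    state, head = "q0", 0
    for _ in range(steps):
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return tape

rules_a = {("q0", 0): (0, 1, "q0"), ("q0", 1): (1, 1, "q0")}   # leaves 1s alone
rules_b = {("q0", 0): (0, 1, "q0"), ("q0", 1): (0, 1, "q0")}   # erases 1s

print(run(rules_a, [0] * 6), run(rules_b, [0] * 6))            # identical on the blank tape
print(run(rules_a, [0, 1, 0, 1, 0, 0]),
      run(rules_b, [0, 1, 0, 1, 0, 0]))                        # they differ here
```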
Turing machines are not inherently behaviourist about computation. It could be argued that the core of a Turing machine, the symbol-replacement table, is inherently verb-like rather than substantive-like, and that the machine is not a description of a bunch of steps but of one step being applied many times. I get how one could approach the thing by thinking that a machine is a series of tape states and conclude that machines are single-narrative focused. But a specification of a machine does not need to refer to any single tape state: the transition table alone, without any reference to data, is sufficient. Sure, it makes up a canonical story when started on the trivial data of the empty tape, but this is not definitionally essential. It is not as if cellular automata get confused about what to do if their field gets noise mid-execution (the rules of the Game of Life leave no choice about how the life responds to a pixel that is miraculously created by outside forces).
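A small sketch of that last point (illustrative only): the Game of Life update is a pure function of the current grid, so a live cell injected mid-run "by outside forces" is processed exactly like any other cell.

```python
# One Game of Life step on a toroidal grid: live cells survive with 2 or 3
# live neighbours, dead cells come alive with exactly 3.  The rule applies to
# whatever grid it is handed, including one with an injected noise pixel.
def step(grid):
    h, w = len(grid), len(grid[0])
    def live_neighbours(r, c):
        return sum(grid[(r + dr) % h][(c + dc) % w]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and live_neighbours(r, c) in (2, 3))
                  or (not grid[r][c] and live_neighbours(r, c) == 3) else 0
             for c in range(w)] for r in range(h)]

grid = [[0] * 5 for _ in range(5)]
grid[2][1] = grid[2][2] = grid[2][3] = 1        # a blinker, mid-execution
grid[0][0] = 1                                  # "miraculous" noise pixel
print(step(grid))                               # the rule handles it the same way
```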
I get an intuitive sense that this new concept deals with functions typically paying rent, and that if some detail of the function were different, some kind of ill fate would befall. That turns into an approach where a detail of the function can be connected to the biological niche the entity is in and the incentive structure that niche pushes on it. I do think it is ultimately going to be connected, in that evolution is a blind force: while it makes fancy stuff, the stuff doesn't actually have a design. So sure, we can probably sort out the load-bearing parts from the non-load-bearing parts, but I don't think it ever really gets to the “this bit has this purpose” level.
More importantly, Turing machines can emulate all machines that aren't hypercomputers, so they can emulate stochastic or random Turing machines.