This is an old question in the philosophy of computation: what is it that makes a physical computation about anything in particular?
And it’s one that’s overhyped, but actually not that complicated.
A computation is “about” something else if and to the extent that there exists mutual information between the computation and the something else. Old thread on the matter.
Does observing the results of a physical process tell you something about the result of the computation 2+3? Then it’s an implementation of the addition of 2 and 3. Does it consistently tell you something about addition problems in general? Then it’s an implementation of addition.
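As a rough sketch of that test (the 10% glitch rate, the readout model, and every name below are illustrative assumptions, not anything fixed by the argument), one can estimate the mutual information between the process’s output R and the answer C over random addition problems:

```python
import math, random
from collections import Counter

random.seed(0)

def readout(a, b):
    """Toy physical process: usually reports a + b, occasionally glitches."""
    return a + b if random.random() < 0.9 else random.randint(0, 18)

# Random addition problems, paired with what the process shows for them.
samples = []
for _ in range(20000):
    a, b = random.randint(0, 9), random.randint(0, 9)
    samples.append((readout(a, b), a + b))

n = len(samples)
joint = Counter(samples)               # empirical p(r, c)
p_r = Counter(r for r, _ in samples)   # empirical p(r)
p_c = Counter(c for _, c in samples)   # empirical p(c)

# I(R; C) = sum over (r, c) of p(r,c) * log2( p(r,c) / (p(r) * p(c)) )
mi = sum((cnt / n) * math.log2((cnt / n) / ((p_r[r] / n) * (p_c[c] / n)))
         for (r, c), cnt in joint.items())

print(f"estimated I(R; C) = {mi:.2f} bits")  # clearly positive: R is about addition
```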
This doesn’t fall into the trap of “joke interpretations” where e.g. you apply complicated, convoluted transformations to molecular motions to hammer them into a mapping to addition. The reason is that by applying such a complicated (and probably ever-expanding) interpretation, the physical process is no longer telling you something about the answer; rather, the source of the output, by means of specifying the convoluted transformation, is you, and every result originates in you, not the physical process.
> A computation is “about” something else if and to the extent that there exists mutual information between the computation and the something else.
Mutual information is defined for two random variables, and random variables are mappings from a common sample space to the variables’ domains. What are the mappings for two “things”? Mutual information doesn’t just “exist”; it is given by mappings which have to be somehow specified, and which can in general be specified to yield an arbitrary result.
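To make the worry concrete (the construction below is my own illustration of it): if the process’s microstates never repeat, a post-hoc lookup table maps them onto any computation you like, so the measured dependence is whatever the table’s author wants it to be.

```python
import math, random
from collections import Counter

random.seed(1)
n = 5000

# "Physical process": microstates that never repeat (think thermal noise in a rock).
states = random.sample(range(10**9), n)
# Target computation: answers to random addition problems, drawn independently.
answers = [random.randint(0, 9) + random.randint(0, 9) for _ in range(n)]

# The "joke interpretation": a lookup table pairing each state with an answer.
interp = dict(zip(states, answers))
decoded = [interp[s] for s in states]   # equals `answers` exactly, by construction

def mi(xs, ys):
    """Empirical mutual information in bits."""
    m = len(xs)
    pj, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / m) * math.log2((c / m) / ((px[x] / m) * (py[y] / m)))
               for (x, y), c in pj.items())

print(decoded == answers)    # True: the table, not the rock, supplied the answers
print(mi(decoded, answers))  # maximal: every bit came from the interpretation
```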
> This doesn’t fall into the trap of “joke interpretations” where e.g. you apply complicated, convoluted transformations to molecular motions to hammer them into a mapping to addition. The reason is that by applying such a complicated (and probably ever-expanding) interpretation, the physical process is no longer telling you something about the answer; rather, the source of the output, by means of specifying the convoluted transformation, is you, and every result originates in you, not the physical process.
When you distinguish between the mappings “originating” in the interpreter versus in the “physical process itself”, you are judging the relevance of the output of the mutual-information calculation in the same motion (“no true Scotsman”). Mutual information doesn’t compute your answer; deciding whether the mapping came from an approved source does.
> Mutual information is defined for two random variables, and random variables are mappings from a common sample space to the variables’ domains. What are the mappings for two “things”? Mutual information doesn’t just “exist”; it is given by mappings which have to be somehow specified, and which can in general be specified to yield an arbitrary result.
I wasn’t as precise as I should have been. By “mutual information”, I mean “mutual information conditional on yourself”. (Normally, “yourself” is part of the background knowledge predicating any probability and not explicitly represented.) So, as per the rest of my comment, the kind of mutual information I meant is well defined here: Physical process R implements computation C if and to the extent that, given yourself, learning R tells you something about C.
Yes, this has the counterintuitive result that the existence of a computation in a process is observer-dependent (not unlike every other physical law).
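Spelled out as a toy calculation (my formalization; all the structure below is assumed rather than taken from the thread): treat “yourself” as a random variable S and ask whether the conditional mutual information I(C; R | S) is positive.

```python
import math
from collections import defaultdict

def cond_mi(joint):
    """I(C; R | S) in bits, from a dict {(c, r, s): probability}."""
    p_s, p_cs, p_rs = defaultdict(float), defaultdict(float), defaultdict(float)
    for (c, r, s), p in joint.items():
        p_s[s] += p
        p_cs[c, s] += p
        p_rs[r, s] += p
    # I(C; R | S) = sum p(c,r,s) * log2( p(c,r,s) * p(s) / (p(c,s) * p(r,s)) )
    return sum(p * math.log2((p * p_s[s]) / (p_cs[c, s] * p_rs[r, s]))
               for (c, r, s), p in joint.items() if p > 0)

# Genuine implementation: with the observer's background fixed (s = 0), the
# readout r tracks the answer c of a uniformly random one-bit problem.
genuine = {(c, c, 0): 0.5 for c in (0, 1)}
print(cond_mi(genuine))   # 1.0 bit: given yourself, learning R tells you C
```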
> When you distinguish between the mappings “originating” in the interpreter versus in the “physical process itself”, you are judging the relevance of the output of the mutual-information calculation in the same motion (“no true Scotsman”). Mutual information doesn’t compute your answer; deciding whether the mapping came from an approved source does.
No, mutual information is still the deciding factor. As per my above remark, if the source of the computation is really you, by means of your ever-more-complex, carefully designed mapping, then
P(C|self) = P(C|self,R)
i.e., learning about the physical process R didn’t change your beliefs about C. So, conditioning on yourself, there is no mutual information between C and R.
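Numerically (again my own toy construction, with the same cond_mi helper as in the sketch above): when the answer already lives in “self” (below, s determines c outright, and r is independent noise), the conditional mutual information comes out exactly zero.

```python
import math
from collections import defaultdict

def cond_mi(joint):
    """I(C; R | S) in bits, from {(c, r, s): probability}; same helper as above."""
    p_s, p_cs, p_rs = defaultdict(float), defaultdict(float), defaultdict(float)
    for (c, r, s), p in joint.items():
        p_s[s] += p
        p_cs[c, s] += p
        p_rs[r, s] += p
    return sum(p * math.log2((p * p_s[s]) / (p_cs[c, s] * p_rs[r, s]))
               for (c, r, s), p in joint.items() if p > 0)

# "Self" already encodes the answer (c = s); the readout r is independent noise.
joke = {(s, r, s): 0.25 for s in (0, 1) for r in (0, 1)}
print(cond_mi(joke))   # 0.0: P(C|self) = P(C|self,R), so R adds nothing
```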
If you are the real source of the computation, that’s one reason the equality above can hold, but not the only reason.
> I wasn’t as precise as I should have been. By “mutual information”, I mean “mutual information conditional on yourself”. (Normally, “yourself” is part of the background knowledge predicating any probability and not explicitly represented.) So, as per the rest of my comment, the kind of mutual information I meant is well defined here: Physical process R implements computation C if and to the extent that, given yourself, learning R tells you something about C.
Vague, and it doesn’t seem relevant. What is the sample space, and what are the mappings? Conditioning means restricting to a subset of the sample space and seeing how the mappings push forward the probability measure defined on that subset, redrawing the distributions on the variables’ domains. You still need those mappings; they’re what relate different variables to each other.
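For concreteness, here is a bare-bones version of the machinery being asked for (entirely my own formalization): an explicit finite sample space, random variables as mappings defined on it, and conditioning as restriction plus renormalization.

```python
from fractions import Fraction

# Explicit sample space with a uniform measure: outcomes are (a, b, glitch) triples.
omega = [(a, b, g) for a in range(4) for b in range(4) for g in (0, 1)]
p = {w: Fraction(1, len(omega)) for w in omega}

# Random variables are mappings from the sample space to their domains.
C = lambda w: w[0] + w[1]                          # the computation: a + b
R = lambda w: w[0] + w[1] if w[2] == 0 else -1     # the readout: -1 on a glitch

# Conditioning on the event R = 5: restrict to it, then renormalize.
event = [w for w in omega if R(w) == 5]
z = sum(p[w] for w in event)
posterior_C = {}
for w in event:
    posterior_C[C(w)] = posterior_C.get(C(w), Fraction(0)) + p[w] / z

print(posterior_C)   # {5: Fraction(1, 1)}: the mapping C redraws to certainty on 5
```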