Epistemic status: very new to philosophy and theory of mind, but I have taken a couple of graduate courses in subjects related to the theory of computation.
I think there are two separate matters:
I have a physical object that has a means to receive inputs and will do something based on those inputs. Suppose I now create two machines: one that takes 0s and 1s and converts them into something the object receives, and one that observes the actions of the physical object and then spits out an output. Both of these machines operate in time that is simultaneously at most quadratic in the length of the input AND at most linear in the “run time” of the physical object. And both of these machines are “bijective”.
If I create a program that has the same inputs and outputs as the above configuration (which is highly non-unique, and can vary significantly based on the choice of machines), there is some sense in which the physical object “computes” this program. This is kind of weak, since the input/output-converting machines can do a lot to emulate different programs, but at least you’re getting things in a similar complexity class. (A rough sketch of this setup appears below, after the second point.)
You have a central nervous system (CNS) which is currently having a “subjective experience”, whatever that means. It is true that your CNS can be viewed as the aforementioned physical object. And while it is also true that, in the previous framework, one would need a very long and complicated program, it also seems to be true that your subjective experience arises from just a specific sequence of inputs.
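To make the first point concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `physical_object` is a placeholder black box, and the particular `encode`/`decode` converters are made up for illustration; the claim only requires that some bijective converters of this shape exist within the stated time bounds.

```python
# Hypothetical sketch of the setup in the first point. The names and
# dynamics here are made up for illustration only.

def physical_object(signal: str) -> str:
    """Placeholder for whatever the physical object does with what it receives."""
    return signal[::-1]  # stand-in dynamics: reverse the signal

def encode(bits: str) -> str:
    """Bijective converter from a bit string to something the object receives.
    (Assumed to run in time at most quadratic in len(bits).)"""
    return "".join("A" if b == "0" else "B" for b in bits)

def decode(behavior: str) -> str:
    """Bijective converter from observed behavior back to a bit string.
    (Assumed to run in time at most linear in the object's "run time".)"""
    return "".join("0" if c == "A" else "1" for c in behavior)

def computed_program(bits: str) -> str:
    """The program the object "computes", relative to this choice of converters."""
    return decode(physical_object(encode(bits)))

print(computed_program("0011"))  # -> "1100"
```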
If we were to only consider how the physical object behaves with a few specific inputs, I think it’s difficult to eliminate any possibilities for what the object is computing. When I see thought experiments like Putnam’s rock, they make sense to me because we’re only looking at a specific computation, not a full input-output set.
Edit: @Davidmanheim I’ve read your reply and agree that I’ve slightly misinterpreted your post. I’ll think about whether the above ideas can be salvaged from the angle of measuring information in a long but finite sequence (e.g. Kolmogorov complexity) and reply when I have time.
OK, so this is helpful, but if I understood you correctly, I think it’s assuming too much about the setup. For #1, in the examples we’re discussing, the states of the object aren’t changing in complex but predictable ways; the object merely changes “states” along a path that can be predicted in advance, and that path can then be mapped to some set of states. The states are arbitrary, and per the argument they don’t vary in any way that does computational work, so as I argued they can be mapped to some set of consecutive integers. But this means that the actions of the physical object are predetermined by the mapping.
And the difference between that situation and the CNS is that we know the neural circuitry is doing work: the exact features are complex and only partly understood, but the result is clearly capable of doing computation in the sense of Turing machines.
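Here’s a toy illustration (my own construction, not anything from the post) of the point that a mapping to consecutive integers can be made to “compute” anything, precisely because the mapping has the answers baked in:

```python
# Toy illustration: the "object" just ticks through states 0, 1, 2, ...,
# and the interpretation map carries all of the computational content.

def real_computation_trace(x0: int, steps: int) -> list[int]:
    """Some genuine computation's trace (arbitrary toy dynamics)."""
    trace = [x0]
    for _ in range(steps):
        trace.append((3 * trace[-1] + 1) % 17)
    return trace

trace = real_computation_trace(5, 10)

# The "physical object": its states are just consecutive integers.
object_states = list(range(len(trace)))

# The mapping from object states to computational states. All of the work
# lives in this lookup table, which was built from the precomputed trace.
interpretation = {t: trace[t] for t in object_states}

# Under this mapping the object "computes" the trace, but only because the
# answers were predetermined in the mapping.
print([interpretation[t] for t in object_states] == trace)  # True
```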
Okay, let me know if this is a fair assessment:
Let’s consider someone meditating in a dark and mostly-sealed room with minimal sensory inputs, and they’re meditating in a way that we can agree they’re having a conscious experience. Let’s pick a 1 second window and consider the CNS and local environment of the meditator during that window.
(I don’t know much physics, so this might need adjustment): Let’s say we had a reasonable guess at an “initial wavefunction” of the meditator in that window. Maybe this hypothetical is unreasonable in some deep way, and if so that’s worth pushing back on. But supposing it can be done, and we had a sufficiently powerful supercomputer, we could encode and simulate possible trajectories of this CNS over a one-second window. CF suggests that there is a genuine conscious experience there.
Now let’s look at how one such simulation is encoded, which we could view as a long string of 0s and 1s. The tricky part (I think) is as follows: we have a way of understanding these 0s and 1s as representing particles, and the process of interpreting them as states of particles is “simple”. But I can’t convert that understanding rigorously into the length of a program, because all a program can do is convert one encoding into another (and presumably we’ve designed this encoding to be as straightforward to interpret as possible, rather than as short as possible).
Let’s say I have sand swirling around in a sandstorm. I likewise pick a section of this, and do something like the above to encode it as a sequence of integers in a manner that is as easy for a human to interpret as possible, and makes no effort to be compressed.
Now I can ask for the K-complexity of the CNS string, given the sand-swirling sequence as input (i.e. the size of the smallest Turing machine that prints the CNS string with the sand-swirling sequence on its input tape). Divide this by the K-complexity of the CNS string. If the resulting fraction is close to zero, maybe there’s a sense in which the sand-swirling sequence is really emulating the meditator’s conscious experience. But this ratio is probably closer to 1. (By the way, the choice of using K-complexity is itself suspect, but it can be swapped with other notions of complexity.)
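K-complexity is uncomputable, so here’s a crude sketch of how one might approximate that ratio in practice, using a general-purpose compressor as a stand-in (a standard but imperfect proxy). The byte strings are placeholders for the actual encodings of the CNS simulation and the sandstorm:

```python
# Crude sketch: approximate the conditional-complexity ratio with a
# compressor. lzma is a stand-in for K-complexity, which is uncomputable.
import lzma

def approx_K(x: bytes) -> int:
    """Compressed length as a rough proxy for K(x)."""
    return len(lzma.compress(x))

def approx_K_given(x: bytes, y: bytes) -> int:
    """Rough proxy for K(x | y): extra bits to describe x once y is known."""
    return max(approx_K(y + x) - approx_K(y), 1)

cns_string = b"..."   # placeholder: encoded CNS simulation
sand_string = b"..."  # placeholder: encoded sandstorm section

# Near 0: the sand sequence supplies most of what is needed to reproduce
# the CNS string. Near 1: it contributes essentially nothing.
ratio = approx_K_given(cns_string, sand_string) / approx_K(cns_string)
print(ratio)  # meaningless for these placeholders, but shows the shape
```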
What I can’t seem to shake is that it seems to be fundamentally important that we have some notion of 0s and 1s encoding things in a manner that is optimally “human-friendly”. I don’t know how this can be replaced with a way that avoids needing a sentient being.
That seems like a reasonable idea. It seems not at all related to what any of the philosophers proposed.
For their proposals, it seems like the computational process is more like the following (see the sketch after this list):
1. Extract a specific string of 1s and 0s from the sandstorm’s initial position, and another from its final position, with the same length as the full description of the mind.
2. Calculate the bitwise sum of the initial mind state and the initial sand position.
3. Calculate the bitwise sum of the final mind state and the final sand position.
4. Take the output of step 2 and replace it with the output of step 3.
5. Declare that the sandstorm is doing something isomorphic to what the mind did. Ignore the fact that the internal process is completely unrelated, and all of the computation was done inside of the mind, and you’re just copying answers.
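Here’s a sketch of that trick in code, with XOR standing in for the “bitwise sum” and random bit strings standing in for the sandstorm (all values made up). The punchline is that the “interpretation” is constructed out of the mind’s own answers:

```python
# Sketch of the trick described in steps 1-5, with XOR standing in for
# the "bitwise sum". All values here are made up for illustration.
import secrets

N = 64  # length (in bits) of the "full description of the mind"; arbitrary

# The mind actually computes something: initial state -> final state.
mind_initial = secrets.randbits(N)
mind_final = (mind_initial * 3 + 1) % (1 << N)  # stand-in for real cognition

# Two bit strings read off the sandstorm, unrelated to the mind (step 1).
sand_initial = secrets.randbits(N)
sand_final = secrets.randbits(N)

# Steps 2-3: the "interpretation" is just the mind's answers XORed with the sand.
interp_in = mind_initial ^ sand_initial
interp_out = mind_final ^ sand_final

# Steps 4-5: under this interpretation the sandstorm's transition
# sand_initial -> sand_final "reproduces" mind_initial -> mind_final,
# but only because the answers were copied into the interpretation.
print((sand_initial ^ interp_in) == mind_initial)  # True
print((sand_final ^ interp_out) == mind_final)     # True
```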
I’m going to read https://www.scottaaronson.com/papers/philos.pdf, https://philpapers.org/rec/PERAAA-7, and the appendix here: https://www.lesswrong.com/posts/dkCdMWLZb5GhkR7MG/ (as well as the actual original statements of Searle’s Wall, Johnston’s popcorn, and Putnam’s rock), and when that’s eventually done I might report back here or make a new post if this thread is long dead by then