OK, so this is helpful, but if I understood you correctly, I think it’s assuming too much about the setup. For #1, in the examples we’re discussing, the object’s states aren’t changing in complex but predictable ways; the claim is only that it will change “states” along a path that can be predicted in advance and mapped to some set of states. The states are arbitrary and, per the argument, don’t vary in any way that does work, so, as I argued, they can be mapped to some set of consecutive integers. But this means that the actions of the physical object are predetermined in the mapping.
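To make that concrete, here is a minimal toy sketch (in Python, with a made-up "clock" standing in for the physical object) of what I mean by the mapping doing all the work:

```python
# Toy illustration: any object whose states follow a fixed, predictable path
# can be mapped onto consecutive integers. The mapping, not the object,
# carries all of the structure.

def build_mapping(predicted_states):
    """Map each (arbitrary) state to its index along the predicted path."""
    return {state: i for i, state in enumerate(predicted_states)}

# Hypothetical "physical object": a clock hand that visits positions in a fixed order.
predicted_path = ["12:00", "12:01", "12:02", "12:03"]
mapping = build_mapping(predicted_path)

# The object "computes" 0, 1, 2, 3, ... only because we built the mapping from a
# prediction of its path. Swap in any other predictable sequence of distinct
# states and the same trick works; the states themselves do no work.
for state in predicted_path:
    print(state, "->", mapping[state])
```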
And the difference between that situation and the CNS is that we know the neural circuitry is doing work—the exact features are complex and only partly understood, but the result is clearly capable of doing computation in the sense of Turing machines.
Okay, let me know if this is a fair assessment:
Let’s consider someone meditating in a dark and mostly-sealed room with minimal sensory inputs, and they’re meditating in a way that we can agree they’re having a conscious experience. Let’s pick a 1 second window and consider the CNS and local environment of the meditator during that window.
(I don’t know much physics, so this might need adjustment): Let’s say we had a reasonable guess of an “initial wavefunction” of the meditator in that window. Maybe this hypothetical is unreasonable in a deep way and this deserves to be fought. But supposing it can be done, and we had a sufficiently powerful supercomputer, we could encode and simulate possible trajectories of this CNS over a one second window. CF suggests that there is a genuine conscious experience there.
Now let’s look at how one such simulation is encoded, which we could view as a long string of 0s and 1s. The tricky part (I think) is as follows: we have a way of understanding these 0s and 1s as particles, and the process of interpreting them as states of particles is “simple”. But I can’t convert that understanding rigorously into the length of a program, because all a program can do is convert one encoding into another (and presumably we’ve designed this encoding to be as straightforward to interpret as possible, rather than as short as possible).
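As a toy illustration of what I mean by an encoding optimized to be straightforward to interpret rather than short (the particle fields and layout here are invented for the example, not any real simulation format):

```python
import struct

# Toy "human-friendly" encoding: each particle is a fixed-width record of
# labelled fields written in order, with no attempt at compression.
# The fields and units are invented purely for illustration.

def encode_particle(pid, x, y, z, spin):
    # One 32-bit int id, three 64-bit floats, one 32-bit int spin flag.
    return struct.pack(">i3di", pid, x, y, z, spin)

def encode_snapshot(particles):
    # Concatenate records; reading it back is trivial because every record has
    # the same layout, but the string is far longer than it needs to be.
    return b"".join(encode_particle(*p) for p in particles)

snapshot = encode_snapshot([(0, 0.1, 0.2, 0.3, 1), (1, 0.4, 0.5, 0.6, 0)])
bits = "".join(f"{byte:08b}" for byte in snapshot)
print(len(bits), "bits for 2 particles")
```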
Let’s say I have sand swirling around in a sandstorm. I likewise pick a section of this, and do something like the above to encode it as a sequence of integers in a manner that is as easy for a human to interpret as possible, and makes no effort to be compressed.
Now I can ask for the K-complexity of the CNS string given the sand-swirling sequence as input (i.e. the size of the smallest Turing machine that prints the CNS string with the sand-swirling sequence on its input tape). Divide this by the K-complexity of the CNS string. If the resulting fraction is close to zero, maybe there’s a sense in which the sand-swirling sequence is really emulating the meditator’s conscious experience. But this ratio is probably closer to 1. (By the way, the choice of using K-complexity is itself suspect, but it could be swapped for other notions of complexity.)
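K-complexity is of course uncomputable, so any actual check would have to substitute something computable. Here is a rough sketch using an off-the-shelf compressor (zlib) as a crude stand-in for it, with random byte strings as placeholders for the two encoded sequences:

```python
import os
import zlib

def compressed_len(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

def conditional_complexity(target: bytes, given: bytes) -> int:
    # Crude stand-in for K(target | given): the extra compressed length the
    # target adds once the compressor has already seen `given`.
    return max(compressed_len(given + target) - compressed_len(given), 1)

def emulation_ratio(cns_string: bytes, sand_string: bytes) -> float:
    # Approximates K(CNS | sand) / K(CNS).
    # Near 0: the sand sequence effectively contains the CNS trajectory.
    # Near 1: knowing the sand sequence buys you essentially nothing.
    return conditional_complexity(cns_string, sand_string) / compressed_len(cns_string)

# Placeholder inputs; in practice these would be the encoded simulation
# outputs described above.
cns_string = os.urandom(1 << 14)
sand_string = os.urandom(1 << 14)
print(emulation_ratio(cns_string, sand_string))  # ~1 for unrelated strings
print(emulation_ratio(cns_string, cns_string))   # much smaller if the "sand" really encodes the CNS
```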
What I can’t seem to shake is that it seems to be fundamentally important that we have some notion of 0s and 1s encoding things in a manner that is optimally “human-friendly”. I don’t know how this can be replaced with a way that avoids needing a sentient being.
That seems like a reasonable idea. It seems not at all related to what any of the philosophers proposed.
For their proposals, it seems like the computational process is more like the following (a toy sketch of the trick appears after the list):
1. Extract a specific string of 1s and 0s from the sandstorm’s initial position, and another from its final position, each with the same length as the full description of the mind.
2. Calculate the bitwise sum of the initial mind state and the initial sand position.
3. Calculate the bitwise sum of the final mind state and the final sand position.
4. Take the output of step 2 and replace it with the output of step 3.
5. Declare that the sandstorm is doing something isomorphic to what the mind did. Ignore the fact that the internal process is completely unrelated, that all of the computation was done inside of the mind, and that you’re just copying answers.
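A minimal sketch of why that trick is empty, taking “bitwise sum” to mean XOR (addition mod 2) and using random byte strings as stand-ins for the mind and sand descriptions:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

n = 32  # length of the "full description of the mind", in bytes (arbitrary)

# The mind actually did the work: it went from one state to another.
mind_initial = os.urandom(n)
mind_final   = os.urandom(n)   # in reality, produced by the mind's own dynamics

# The sandstorm did something entirely unrelated.
sand_initial = os.urandom(n)
sand_final   = os.urandom(n)

# Steps 2 and 3: build "interpretation keys" by bitwise-summing mind and sand states.
key_initial = xor(mind_initial, sand_initial)
key_final   = xor(mind_final, sand_final)

# Step 5: under the mapping "decode the sand with the matching key", the
# sandstorm appears to reproduce the mind's transition exactly...
assert xor(sand_initial, key_initial) == mind_initial
assert xor(sand_final, key_final) == mind_final

# ...but the keys were computed FROM the mind states, so all of the computation
# happened in the mind (and in building the keys); the sand is just a one-time
# pad being used to copy answers.
print("'isomorphism' holds, but only because the mapping smuggles in the answer")
```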
I’m going to read https://www.scottaaronson.com/papers/philos.pdf, https://philpapers.org/rec/PERAAA-7, and the appendix here: https://www.lesswrong.com/posts/dkCdMWLZb5GhkR7MG/ (as well as the actual original statements of Searle’s Wall, Johnston’s popcorn, and Putnam’s rock), and when I’ve eventually finished I might report back here, or make a new post if this thread is long dead by then.
You should also read the relevant sequence about dissolving the problem of free will: https://www.lesswrong.com/s/p3TndjYbdYaiWwm9x
I just read through the sequence. Eliezer is a fantastic writer and surprisingly well-versed in many areas, but he generally writes to convince a broad audience of his perspective. I personally prefer writing that gets into the technical weeds and focuses on convincing the reader of the plausibility of their perspective, instead of the absolute truth of it (which is why I listed Scott Aaronson’s paper first; I’ve read many of his other papers and blogs, including on the topic of free will, and really enjoy them).