I don’t have a well-developed theory here. But a few related ideas:
simplicity matters
evolution over time matters—maybe you can map all the neurons in my head and their activations at a given moment in time to a bunch of grains of sand, but the mapping is going to fall apart at the next moment (unless you include some crazy updating rule, but that violates the simplicity requirement)
accessibility matters—I’m a bit hesitant on this one. I don’t want to say that someone with locked-in syndrome is not conscious. But if some mathematical object that exists only in Tegmark V is conscious (by the previous criteria), yet there’s no way for us to interact with it, then maybe that’s less relevant.
Ahh, I see. Yeah, I think that assigning moral weight to different properties of consciousness might be a good way forward here. But it still seems really weird that there are infinitely many consciousnesses operating at any given time, and that makes me a bit suspicious of the computational theory of consciousness.