I was thinking of the computational theory of consciousness as basically being the same thing as saying that consciousness could be substrate independent. (E.g. you could have conscious uploads.)
I think this then leads you to ask, “If consciousness is not specific to a substrate, and it’s just a pattern, how can we ever say that something does or does not exhibit the pattern? Can’t I arbitrarily map between objects and parts of the pattern, and say that something is isomorphic to consciousness, and therefore is conscious?”
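To make that worry concrete, here's a toy sketch in Python (all the state labels and sequences are made up for illustration): for any physical system whose states we can tell apart over time, a pure lookup table maps its trajectory onto whatever computation we like.

```python
# Toy sketch of the "arbitrary mapping" worry. All state labels and
# sequences below are made up for illustration.

# Abstract states of some target computation at t = 0, 1, 2, 3, 4.
target_states = ["s0", "s1", "s2", "s1", "s3"]

# Observed states of an arbitrary physical system (say, a patch of sand),
# labeled however we like, as long as each time step is distinguishable.
sand_states = ["cfg_17", "cfg_4", "cfg_99", "cfg_23", "cfg_8"]

# The "crazy" interpretation: a pure lookup table pairing each sand state
# with whatever the computation happened to be doing at that moment.
interpretation = dict(zip(sand_states, target_states))

# Under this mapping, the sand's trajectory reproduces the computation
# exactly, so by a naive isomorphism criterion the sand "implements" it.
mapped = [interpretation[s] for s in sand_states]
assert mapped == target_states
```

The table reproduces the computation exactly, but only because it is a transcript of the answer; there's no structure to the mapping beyond the pairing itself.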
And my proposal is that maybe it makes sense to talk in terms of something like reference frames. Sure, there’s some reference frame where you could map between grains of sand and neurons, but it’s a crazy reference frame and not one that we care about.
I don’t have a well-developed theory here. But a few related ideas:
- simplicity matters
- evolution over time matters—maybe you can map all the neurons in my head and their activations at a given moment in time to a bunch of grains of sand, but the mapping is going to fall apart at the next moment (unless you include some crazy updating rule, but that violates the simplicity requirement); see the sketch after this list
- accessibility matters—I'm a bit hesitant on this one. I don't want to say that someone with locked-in syndrome is not conscious. But if some mathematical object that only exists in Tegmark V is conscious (according to the previous definitions), but there's no way for us to interact with it, then maybe that's less relevant.
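To gesture at how the first two criteria might combine, here's a rough sketch (again with made-up traces, and with table size as a crude stand-in for description length; this is an assumption, not a worked-out theory): demand that one fixed mapping hold at every time step, and prefer mappings with short descriptions.

```python
# Rough sketch of the "simplicity" and "evolution over time" criteria.
# The scoring rule (table size as a stand-in for description length) is
# an assumption for illustration, not a worked-out theory.

def holds_over_time(mapping, system_trace, target_trace):
    """True if ONE fixed mapping reproduces the computation at every step."""
    return all(mapping.get(s) == t for s, t in zip(system_trace, target_trace))

# Target computation over three time steps.
target_trace = ["s0", "s1", "s0"]

# A brain-like system: a simple two-entry mapping stays valid as both
# systems evolve, and would keep working for arbitrarily long runs.
brain_trace = ["fire", "rest", "fire"]
brain_mapping = {"fire": "s0", "rest": "s1"}

# A sand-like system: its states wander arbitrarily, so the only mapping
# that works is a lookup table with one entry per time step. Its size
# grows with the length of the run, and at step 4 it breaks unless we
# patch in yet another entry (the "crazy updating rule").
sand_trace = ["cfg_17", "cfg_4", "cfg_99"]
sand_mapping = {"cfg_17": "s0", "cfg_4": "s1", "cfg_99": "s0"}

assert holds_over_time(brain_mapping, brain_trace, target_trace)
assert holds_over_time(sand_mapping, sand_trace, target_trace)

# Both mappings "work", but simplicity separates them:
print(len(brain_mapping))  # 2, fixed no matter how long the run
print(len(sand_mapping))   # 3, and grows by one entry per extra step
```

On this scoring, the brain-to-computation mapping is a small constant rule, while the sand-to-computation mapping is just a growing transcript, which is roughly what I mean by a "crazy" reference frame.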
Ahh I see. Yeah, I think that assigning moral weight to different properties of consciousness might be a good way forward here. But it still seems really weird that there are infinitely many consciousnesses operating at any given time, which makes me a bit suspicious of the computational theory of consciousness.
I mean, from that reference frame, does that consciousness feel pain? If so, why do we not care about it? It seems to me like when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think some similar argument applies to longtermism). Maybe we want to tile the universe in such a way that there are more countably infinite pleasure patterns than pain patterns, or something.
And how does this relate back to realities? Are we saying that the sand operates in a separate reality?
For the way I mean reference frame, I only care about my reference frame. (Or maybe I care about other frames in proportion to how much they align with mine.) Note that this is not the same thing as egoism.
How do you define reference frame?
I don’t have a good answer for this. I’m kinda still at the vague intuition stage rather than the clear theory stage.
My sense is that reference frame for you is something like “how externally similar is this entity to me,” whereas for me the thing that matters is “how similar internally is this consciousness to my consciousness.” And if the computational theory of consciousness is true, the answer is “many consciousnesses are very similar.”
Obviously this is at the level of “not even a straw man” since you’re gesturing at vague intuitions, but based on our discussion so far this is as close as I can point to a crux.
Hmm, it’s not so much about how similar it is to me as it is like, whether it’s on the same plane of existence.
I mean, I guess that’s a certain kind of similarity. But I’m willing to impute moral worth to very alien kinds of consciousness, as long as it actually “makes sense” to call them a consciousness. The making sense part is the key issue though, and a bit underspecified.
Here’s an analogy—is Hamlet conscious?
Well, Hamlet doesn’t really exist in our universe, so my plan for now is to not consider him a consciousness worth caring about. But if you start to deal with harder cases, whether it exists in our universe becomes a trickier question.
To me this is simply empirical. Is the computational theory of consciousness true without reservation? Then if the computation exists in our universe, the consciousness exists. Perhaps it’s only partially true, and more complex computations, or computations that take longer to run, have less of a sense of consciousness, and therefore it exists, but