And my proposal is that maybe it makes sense to talk in terms of something like reference frames. Sure, there’s some reference frame where you could map between grains of sand and neurons, but it’s a crazy reference frame and not one that we care about.
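The "crazy reference frame" point can be made concrete with a toy sketch. The idea, roughly: for any sequence of arbitrary physical states and any computation's state trajectory of the same length, a mapping between them always exists, but the mapping is pure post-hoc bookkeeping that encodes the computation itself. This is only an illustration of that intuition; all names here are made up for the example.

```python
# A tiny "computation": a counter stepping through states 0, 1, 2, 3.
computation_trace = [0, 1, 2, 3]

# Arbitrary physical states -- say, successive snapshots of sand positions.
sand_trace = ["heap-A", "heap-B", "heap-C", "heap-D"]

# The "reference frame": a lookup table built after the fact, pairing each
# sand state with a computation state. Such a table always exists.
reference_frame = dict(zip(sand_trace, computation_trace))

# Under this frame, the sand "implements" the counter...
decoded = [reference_frame[s] for s in sand_trace]
assert decoded == computation_trace

# ...but notice that building the table already required knowing the whole
# computation. The frame does all the work, which is one way of cashing out
# why it seems like a frame "we don't care about."
```

The design point is that `reference_frame` carries all the structure: nothing about the sand constrained it, so the same sand trace could be mapped onto any other four-state computation just as easily.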
I mean, from that reference frame, does that consciousness feel pain? If so, why do we not care about it? It seems to me like when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think some similar argument applies to longtermism). Maybe we want to tile the universe in such a way that there are countably infinitely more pleasure patterns than pain patterns, or something.
And how does this relate back to realities? Are we saying that the sand operates in a separate reality?
For the way I mean reference frame, I only care about my reference frame. (Or maybe I care about other frames in proportion to how much they align with mine.) Note that this is not the same thing as egoism.
How do you define reference frame?
I don’t have a good answer for this. I’m kinda still at the vague-intuition stage rather than the clear-theory stage.
My sense is that reference frame for you is something like “how externally similar is this entity to me,” whereas for me the thing that matters is “how similar internally is this consciousness to my consciousness.” And if the computational theory of consciousness is true, the answer is “many consciousnesses are very similar.”
Obviously this is at the level of “not even a straw man” since you’re gesturing at vague intuitions, but based on our discussion so far this is as close as I can point to a crux.
Hmm, it’s not so much about how similar it is to me as it is like, whether it’s on the same plane of existence.
I mean, I guess that’s a certain kind of similarity. But I’m willing to impute moral worth to very alien kinds of consciousness, as long as it actually “makes sense” to call them a consciousness. The making sense part is the key issue though, and a bit underspecified.
Here’s an analogy—is Hamlet conscious?
Well, Hamlet doesn’t really exist in our universe, so my plan for now is to not consider him a consciousness worth caring about. But if you start to deal with harder cases, whether it exists in our universe becomes a trickier question.
To me this is simply empirical. Is the computational theory of consciousness true without reservation? Then if the computation exists in our universe, the consciousness exists. Or perhaps it’s only partially true, and more complex computations, or computations that take longer to run, have less of a sense of consciousness, and therefore it exists, but to a lesser degree.