I think that only makes sense to do if those minds are literally “less conscious” than other minds, though. Otherwise, why would I care less about them just because they’re more complex?
It does make sense to me to talk about “speed” and “number of observer moments” as part of moral weight, but “complexity of definition” only makes sense if those minds experience things differently than I do.
Description complexity is the natural generalization of “speed” and “number of observer moments” to infinite universes/arbitrary embeddings of minds in those universes. It manages to scale as (the log of) the density of copies of an entity, while avoiding giving all the measure to Boltzmann brains.
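(A rough version of the scaling claim, in notation I’m making up here: if copies of a mind occur in the world $w$ with density $\rho$, then pointing at the nearest copy takes about $\log_2(1/\rho)$ bits of address, so

$$K(\text{mind} \mid w) \approx \log_2(1/\rho) + c, \qquad 2^{-K(\text{mind} \mid w)} \approx \rho \cdot 2^{-c},$$

where $c$ is the fixed cost of the extraction procedure. The weight tracks the density, and the complexity tracks its log. A Boltzmann brain, being the product of an extremely rare fluctuation, only has very long addresses, so it gets a correspondingly tiny weight instead of dominating the count.)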
Again, this seems to be an empirical question whose answer you can’t just assume.
Is it an empirical question? It seems more like a philosophical question (what evidence could we see that would change our minds?)
Here’s a (not particularly rigorous) philosophical argument in favour. The substrate on which a mind is running shouldn’t affect its moral status. So we should consider all computable mappings from the world to a mind as being ‘real’. On the other hand, we want the total “number” of observer-moments in a given world to be finite (otherwise we can’t compare the values of different worlds). This suggests that we should assign a ‘weight’ to different experiences, which must be exponentially decreasing in program length for the sum to converge.
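(To spell that last step out in the same made-up notation: let $U$ be a fixed universal prefix machine, $w$ a description of the world, and $\ell(p)$ the length of a program $p$. The natural weight on an experience $m$ is then

$$\mu_w(m) = \sum_{p \,:\, U(p,\, w) = m} 2^{-\ell(p)}, \qquad \sum_m \mu_w(m) \le \sum_p 2^{-\ell(p)} \le 1,$$

where the last inequality is Kraft’s inequality for prefix-free programs. Since the number of programs of a given length grows exponentially, any weighting that shrinks more slowly than exponentially in $\ell(p)$ would make the total diverge.)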
We could talk to different minds and have them describe their experience, and then compare the number of observer moments to their complexity.
But the question then becomes how you sample these minds you are talking to. Do you just go around literally speaking to them? Clearly this will miss a lot of minds. But you can’t use completely arbitrary ways of accessing them either, because then you might end up packing most of the ‘mind’ into your way of interfacing with them. Weighting by complexity is meant to provide a principled way of sampling minds, one that includes all computable patterns without attributing mind-fulness to noise.
(Just to clarify a bit: ‘complexity’ here refers to the complexity of selecting a mind given the world, not the complexity of the mind itself. It’s meant to be a generalization of ‘number of copies’ and ‘exists/does not exist’, not a property inherent to the mind.)
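Here’s a toy sketch of the kind of sampling procedure I mean. Everything in it (the miniature ‘machine’, the is_mindlike predicate, the cutoff at a maximum program length) is a stand-in invented for illustration; the idealized version would use a universal machine with no cutoff:

```python
import itertools
import random

# Toy sketch of complexity-weighted sampling of "minds" from a world:
# every program is weighted by 2^(-length), run against a description of
# the world, and outputs that pass a mind-detecting predicate are sampled
# in proportion to their weight. The machine, the predicate, and the world
# string are all invented stand-ins.

ALPHABET = "01"          # toy binary programs
MAX_LEN = 12             # cutoff for the (really unbounded) enumeration
WORLD = "001011010011"   # stand-in for a full description of the world


def run(program: str, world: str) -> str:
    """Toy 'machine': read the program as (start, length) and extract a
    substring of the world description. A real version would be a
    universal machine mapping (program, world) -> candidate mind."""
    if len(program) < 2:
        return ""
    start = int(program[: len(program) // 2], 2) % len(world)
    length = int(program[len(program) // 2:], 2) % len(world) + 1
    return world[start:start + length]


def is_mindlike(pattern: str) -> bool:
    """Placeholder for 'this pattern is a mind'; here, any pattern of at
    least three symbols counts."""
    return len(pattern) >= 3


def complexity_weighted_sample(n_samples: int = 5):
    """Enumerate programs by length, weight each by 2^(-length), and
    sample mind-like outputs in proportion to those weights."""
    candidates = []  # (weight, program, extracted pattern)
    for length in range(1, MAX_LEN + 1):
        for bits in itertools.product(ALPHABET, repeat=length):
            program = "".join(bits)
            pattern = run(program, WORLD)
            if is_mindlike(pattern):
                candidates.append((2.0 ** -length, program, pattern))
    weights = [w for w, _, _ in candidates]
    return random.choices(candidates, weights=weights, k=n_samples)


if __name__ == "__main__":
    for weight, program, pattern in complexity_weighted_sample():
        print(f"weight={weight:.6f}  program={program}  pattern={pattern}")
```

The point of the 2^(-length) weights is just that short ways of pointing at a pattern dominate: noise that can only be ‘found’ by baking it into the interfacing program ends up with negligible weight.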
It seems like you can get quite a bit of data from the minds you can interface with? I think it’s true that you can’t sample the space of all possible minds, but testing this hypothesis on just a few seems like high VoI.
What hypothesis would you be “testing”? What I’m proposing is an idealized version of a sampling procedure that could be used to run tests, namely, sampling mind-like things according to their description complexity.
If you mean that we should check if the minds we usually see in the world have low complexity, I think that already seems to be the case, in that we’re the end-result of a low-complexity process starting from simple conditions, and can be pinpointed in the world relatively simply.
I mean, I’m saying get minds with many different complexities, figure out a way to communicate with them, and ask them about their experience.
That would help to figure out if complexity is indeed correlated with the number of observer moments.
But how you test this feels different from the question of whether or not it’s true.
I think we’re talking about different things. I’m talking about how you would locate minds in an arbitrary computational structure (and how to count them); you’re talking about determining what’s valuable about a mind once we’ve found it.