This post seems to be mostly talking about the questions of “what is intelligence” and “what is meaning”, while implying that answering those questions would also help figure out the answer to “what’s the minimum requirement for the subjective experience of happiness”.
But it doesn’t seem at all obvious to me that these are the same question!
Research on the requirements for subjective experience doesn’t, as far as I know, say anything about whether something is intelligent or has meaning. E.g. Thomas Metzinger has argued that a neural representation becomes a phenomenally conscious representation if it’s globally available for the system (for deliberately guided attention, cognitive reference, and control of action), activated within a window of presence (subjectively perceived as being experienced now), bound into a global situational context (experienced as being part of a world), etc. Some researchers have focused on specific parts of these criteria, such as global availability.
Now granted, if your thesis is that a hedonium or mind crime algorithm seems to require some minimum amount of complexity which might be greater than naive expectations would suggest, then the work I’ve mentioned would also support that. But that doesn’t seem to me like it would prevent hedonium scenarios; it would just put some upper bound on how dense with pleasure we can make the universe. And I don’t know of any obvious reason why the required level of complexity for experiencing subjective pleasure would necessarily be even at the human level: probably an animal-level intelligence could be just as happy.
Later on in the post you say:
But those are minor quibbles: the main problem is whether the sense of identity of the agent can be grounded sufficiently well, while remaining accurate if the agent is run trillions upon trillions of times. Are these genuine life experiences? What if the agent learns something new during that period—this seems to stretch the meaning of “learning something new”, possibly breaking it.
Other issues crop up—suppose a lot of my identity is tied up with the idea that I could explore the space around me? In a hedonium world, this would be impossible, as the space (physical and virtual) is taken up by other copies being run in limited virtual environments. Remember, it’s not enough to say “the agent could explore space”; if there is no possibility for the agent to do so, “could explore” can be syntactically replaced with “couldn’t explore” without affecting the algorithm, just its meaning.
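As an aside, to check that I’m reading the syntactic-replacement point correctly before getting to my main objection: my understanding is that a label which no part of the program ever reads can be swapped for its negation without changing a single step of the computation. A minimal sketch of that reading (the names and setup are my own hypothetical illustration, not anything from the post):

```python
# Hypothetical illustration, not code from the post: the agent's "belief" about
# exploration is an unused label, so renaming it cannot change behaviour.

def run_hedonium_copy(steps: int) -> int:
    # A flag that is never read anywhere in the control flow below.
    world_model = {"could_explore_space": True}
    # Renaming the key to "couldnt_explore_space" leaves every state
    # transition bit-for-bit identical; only our interpretation changes.
    happiness = 0
    for _ in range(steps):
        happiness += 1  # the same fixed "maximally happy" update each step
    return happiness

assert run_hedonium_copy(10) == 10  # output is independent of the unused label
```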
But now you seem to be talking about something different from what you were discussing at the beginning of the post. At first you only mentioned the hedonium scenario as one where we took a single maximally happy state and copied it across the universe to obtain the maximum density of happiness; now you seem to be talking about something like “would it be possible to take all currently living humans and make them maximally happy while preserving their identity”. This is a very different scenario from just the plain hedonium scenario.
probably an animal-level intelligence could be just as happy.
In that case it’s not a human-comparable intelligent agent experiencing happiness. So I’d argue that either a) hedonium needs to be more complex than expected, or b) the definition of happiness does not require high-level agents experiencing it.
And I’m arguing that the minimum complexity should be higher than the human level, as you need not only a mind, but also an interaction with an environment of sufficient complexity to ground it as a mind.
At first you only mentioned the hedonium scenario as one where we took a single maximally happy state and copied it across the universe to obtain the maximum density of happiness; now you seem to be talking about something like “would it be possible to take all currently living humans and make them maximally happy while preserving their identity”. This is a very different scenario from just the plain hedonium scenario.
That’s the point. I don’t think that the first setup would count as a happy state, if copied in the way described.