probably an animal-level intelligence could be just as happy.
In that case it’s not a human-comparable intelligent agent experiencing happiness. So I’d argue that either a) hedonium needs to be more complex than expected, or b) the definition of happiness does not require high-level agents experiencing it.
And I’m arguing that the minimum complexity should be higher than the human level, as you need not only a mind, but also an interaction with an environment of sufficient complexity to ground it as a mind.
At first you only mentioned the hedonium scenario as one where we took a single maximally happy state and copied it across the universe to obtain the maximum density of happiness; now you seem to be talking about something like “would it be possible to take all currently living humans and make them maximally happy while preserving their identity”. This is a very different scenario from just the plain hedonium scenario.
That’s the point. I don’t think the first setup would count as a happy state if copied in the way described.