If we’re talking about conditions that don’t correlate well with particular classes of material things, sure.
That’s rarely true of the conditions I (or anyone else I know) value in real life, but I freely grant that real life isn’t a good reference class for these sorts of considerations, so that’s not evidence of much.
Still, failing a specific argument to the contrary, I would expect whatever conditions humans maximally value (assuming there is a stable referent for that concept, which I tend to doubt, but a lot of people seem to believe strongly) to implicitly define a class of objects that optimally satisfies those conditions. Even variety, assuming that it’s not the trivial condition of variety wherein anything is good as long as it’s new.
Computronium actually raises even more questions. It seems that unless one values the accurate knowledge (even when epistemically indistinguishable from the false belief) that one is not a simulation, the ideal result would be to devote all available mass-energy to computronium running simulated humans in a simulated optimal environment.
If we’re talking about conditions that don’t correlate well with particular classes of material things, sure.
I think that needs a bit of refinement. Having lots of food correlates quite well with food, yet no one here wants to tile the universe with white rice. Other people are a necessity for a social circle, but again, few want to tile the universe with humans (okay, there are some here who could be caricatured that way).
I think we all eventually want the entire universe to be physically controllable by humanity (and/or its FAI guardians), but tiling implies a uniformity that we won't want. Certainly not on a local scale, and probably not on a macroscale either.
Computronium actually raises even more questions.
Right, which is why I brought it up as one of the few reasonable counterexamples. Still, it's the programming that makes the difference between heaven and hell.