Happiness, as a state of mind in humans, seems to me less about how strong the “orgasms” are than about how frequently they occur without lessening the probability that they will continue to occur. So what problems might there be with maximizing the total number of future happy seconds experienced by humans, including emulations thereof (other than describing the concepts of ‘human’ and ‘happiness’ to a computer with sufficient accuracy)?
I think doing so would extrapolate to increasing population and longevity up to the limits set by resource constraints, diminishing returns on improving average happiness uptime, and existential-risk mitigation. Those limits seem to me to be the crux of people’s intuitions about the Felix and Wireheading problems.
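The objective sketched above can be written down explicitly (my own notation, not an established formulation): let $h_i(t) \in \{0, 1\}$ indicate whether human or emulation $i$ is experiencing happiness at time $t$. The maximand is then expected total happy seconds,

$$\max \; \mathbb{E}\!\left[\sum_i \int_0^{\infty} h_i(t)\,dt\right],$$

where taking the expectation over possible futures is what does the work: any intervention that buys intense happiness now at the cost of lowering the probability of happiness continuing later reduces the expected integral, so frequency-with-persistence is favored over peak intensity by construction.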