Let’s say we have the capability to create living creatures, and some bored scientist makes one that is relatively intelligent (enough to be considered a person in any meaningful, human sense), capable of language, requiring little sustenance, capable of reproduction, and completely and utterly happy except under the most terrible circumstances. Would the utilitarian view of the situation be to convert all usable resources into habitats for these critters? Would the moral thing be to give the world over to them because they’re better at not making each other’s lives terrible and are happier for the same amount of resources?
I’m pretty new around here, so please forgive the sheer newbishness of my question.
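To make the intuition behind the question concrete, here is a toy calculation with entirely made-up numbers (neither the costs nor the happiness scores come from anywhere in this thread): under a naive total-utilitarian sum, whichever species converts resources into happiness more efficiently dominates any fixed resource budget.

```python
# Toy sketch of the "more happiness per unit of resources" intuition.
# All numbers are invented purely for illustration.

RESOURCE_BUDGET = 1000   # arbitrary units of usable resources

HUMAN_COST = 10          # resources needed to sustain one human
HUMAN_HAPPINESS = 5      # average happiness per human

CRITTER_COST = 1         # "requiring little sustenance"
CRITTER_HAPPINESS = 9    # "utterly happy except under the most terrible circumstances"

def total_happiness(cost_per_capita: int, happiness_per_capita: int, budget: int) -> int:
    """Happiness obtained by spending the whole budget on one species."""
    population = budget // cost_per_capita
    return population * happiness_per_capita

humans = total_happiness(HUMAN_COST, HUMAN_HAPPINESS, RESOURCE_BUDGET)
critters = total_happiness(CRITTER_COST, CRITTER_HAPPINESS, RESOURCE_BUDGET)

print(f"All-human world:   {humans} units of happiness")    # 500
print(f"All-critter world: {critters} units of happiness")  # 9000
```

The replies below push back on exactly the assumptions this sketch bakes in: that “happiness” is a single measurable quantity, and that it is the only thing worth summing.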
What exactly do you mean by “utterly happy”? What’s the empirical test for measuring whether those creatures are “utterly happy”?
The more I interact with people at the high end of that spectrum, the less I think that humans optimize for happiness.
Child-like joyousness was what I was envisioning. The short story Paprika (http://escapepod.org/2014/05/30/ep448-paprika/) ends with all humans long dead; the only remaining human creations are talking squirrels that, at least according to the story’s brief description, live completely carefree and joyous lives. That doesn’t seem feasible to me for living creatures with resource needs, but what if they lived in silico rather than in meat? They would still have resource needs, but possibly far fewer.
I think I am unconvinced that the way humans… “work”, for lack of a better term, is the optimal one. As you said, people at the high end of the intelligence spectrum don’t seem to be happy quite as often (not to suggest that I think people should be dumber, but being both intelligent and ecstatically happy most of the time, without the use of life-shortening drugs, would be nice). I was curious about what has already been discussed on the subject, but I guess I chose the wrong way to ask.
I wasn’t speaking about the high end of the intelligence spectrum but about “fun”, and how certain people feel that they experience too much of it.
The idea of being ecstatically happy most of the time might sound good in theory, but I don’t believe that’s what people actually prefer.
See discussions of utility monsters. Don’t assume that many people here support pure utilitarianism.
Thanks for the link, and sorry for the presumption. The question occurred to me and this was the first place I thought to ask.
Some possible reasons for saying no:
1. We aren’t as good at being happy, but we might be better at improving the scope for future happiness (e.g., maybe space colonization some day). This one becomes harder to justify if these critters are actually our intellectual superiors.
2. Other things matter (to us) besides happiness, and these critters’ lives don’t provide those things as well as ours do. This answer may be irrelevant if you’re only interested in utilitarian arguments.
My moral values are approximately utilitarian, but I would say that what I care about when thinking in utilitarian terms isn’t happiness as such but those things of which happiness is a measure. (This doesn’t require me not to care about happiness; happiness is in fact one of the things that makes us happy. If I were suddenly made much less happy about everything then that fact itself would be a source of unhappiness for me from then on.)
It may be illuminating in this connection to read “Not for the sake of pleasure alone”.