The main thing I got out of reading Bostrom’s Deep Utopia is a better appreciation of this “meaning of life” thing. I had never really understood what people meant by this, and always just rounded it off to people using lofty words for their given projects in life.
The book’s premise is that, after the aligned singularity, the robots will not just be better at doing all your work but also be better at doing all your leisure for you. E.g., you’d never study for fun in posthuman utopia, because you could instead just ask the local benevolent god to painlessly, seamlessly put all that wisdom in your head. In that regime, studying with books and problems for the purpose of learning and accomplishment is just masochism. If you’re into learning, just ask! And similarly for any psychological state you’re thinking of working towards.
So, in that regime, it’s effortless to get a hedonically optimal world, without any unendorsed suffering and with all the happiness anyone could want. Those things can just be put into everyone and everything’s heads directly—again, by the local benevolent-god authority. The only challenging values to satisfy are those that deal with being practically useful. If you think it’s important to be the first to discover a major theorem or be the individual who counterfactually helped someone, living in a posthuman utopia could make things harder in these respects, not easier. The robots can always leave you a preserve of unexplored math or unresolved evil… but this defeats the purpose of those values. It’s not practical benevolence if you had to ask for the danger to be left in place; it’s not a pioneering scientific discovery if the AI had to carefully avoid spoiling it for you.
Meaning is supposed to be one of these values: not a purely hedonic value, and not a value dealing only in your psychological states. A further value about the objective state of the world and your place in relation to it, wherein you do something practically significant by your lights. If that last bit can be construed as something having to do with your local patch of posthuman culture, then there can be plenty of meaning in the postinstrumental utopia! If that last bit is inextricably about your global, counterfactual practical importance by your lights, then you’ll have to live with all your “localistic” values satisfied but meaning mostly absent.
It helps to see this meaning thing if you frame it alongside all the other objectivistic “stretch goal” values you might have. Above and beyond your hedonic values, you might also think it good for you and others to have objectively interesting lives, accomplished and fulfilled lives, and consumingly purposeful lives. Meaning is one of these values, where above and beyond the joyful, rich experiences of posthuman life, you also want to play a significant practical role in the world. We might or might not be able to have lots of objective meaning in the AI utopia, depending on how objectivistic meaningfulness by your lights ends up being.
Considerations that in today’s world are rightly dismissed as frivolous may well, once more pressing problems have been resolved, emerge as increasingly important [remaining] lodestars… We could and should then allow ourselves to become sensitized to fainter, subtler, less tangible and less determinate moral and quasi-moral demands, aesthetic impingings, and meaning-related desirables. Such recalibration will, I believe, enable us to discern a lush normative structure in the new realm that we will find ourselves in—revealing a universe iridescent with values that are insensible to us in our current numb and stupefied condition (pp. 318-9).
Many who believe in God derive meaning from the fact that He chose not to do the tasks they are good at, and left them tasks to try to accomplish, despite God theoretically being able to do anything they can do, only better. It’s common for such people to believe that this meaning would disappear if God disappeared, but when such a person does come to no longer believe in God, they often continue to see meaning in their life[1].
Now atheists worry that building God may destroy all meaning in our actions. I expect we’ll adapt.
(edit: That is to say, I don’t think you’ve adequately described what “meaning of life” is if you’re worried about it going away in the situation you describe)
If anything, they’re more right than wrong: much has been written about the “meaning crisis” we’re in, possibly attributable to greater levels of atheism.
I’m pretty sure that I would study for fun in the posthuman utopia, because I both value and enjoy studying, and a utopia that can’t carry those values through seems like a pretty shallow imitation of one.
There won’t be a local benevolent god to put that wisdom into my head, because I will be a local benevolent god with more knowledge than most others around. I’ll be studying things that have only recently been explored, or that nobody has yet discovered. Otherwise again, what sort of shallow imitation of a posthuman utopia is this?
The tricky part is that, on the margin, I would probably use various shortcuts, and it’s not clear where those shortcuts end, short of just getting knowledge beamed directly into my head.
I already use LLMs to tell me facts, explain things I’m unfamiliar with, handle tedious calculations and coding, generate simulated data, brainstorm, and summarize things. Not much, because current LLMs are pretty bad, but I do use them for all of this, and I would use them more on the margin.
The concept of “the meaning of life” still seems like a category error to me. It’s an attempt to apply a system of categorization used for tools, one in which they are categorized by the purpose for which they are used, to something that isn’t a tool: a human life. It’s a holdover from theistic worldviews in which God created humans for some unknown purpose.
The lesson I draw instead from the knowledge-uploading thought experiment—where having knowledge instantly zapped into your head seems less worthwhile than acquiring it more slowly yourself—is that, to some extent, human values simply are masochistic. Hedonic maximization is not what most people want, even all else being equal. This goes beyond simply valuing the pride of accomplishing difficult tasks, such as the sense of accomplishment one would get from studying on one’s own, above other forms of pleasure. In the setting of this thought experiment, if you wanted the sense of accomplishment, you could get that zapped into your brain too, but much like getting knowledge zapped into your brain instead of studying yourself, an automatically granted sense of accomplishment would be of lesser value. The suffering of studying for yourself is part of what makes us evaluate it as worthwhile.