Ask most people what they imagine a better life and a better world might be, and they will rarely imagine anything more than the present evils removed. Less disease, less starvation, less drudgery, less killing, less oppression. Their positive vision is merely the opposite of these: more health, more food, more fun, more love, more freedom.
When cranked up to transhuman levels, this looks like no ignorance, instant access to all knowledge, no stupidity, unlimited intelligence, no disease, unlimited lifespan, no technological limits, unlimited technological superpower, no environmental constraints, expansion across the universe.
What will people do, when almost everything they currently do is driven by exactly those limits that it is the transhuman vision to eliminate? Think of everything you have done today—how much of it would a transhuman you in a transhuman world have done?
I got out of bed. Sleep? What need has a transhuman of sleep? I showered, unloaded the washing machine that had run overnight, ate breakfast. Surely these and a great deal more stand in the same relation to a transhuman life as the drudgery of a 13th century peasant does to my own. I am typing on a keyboard. A keyboard! How primitive! Later today I will have taiko practice. Practice? Surely we will download such skills, or build robots to do them for us? I value the physical exertion. Exertion? What need, when we are uploads using whatever physical apparatus we choose, which will always run flawlessly?
The vision usually looks like having machines to do our living for us, leaving us as mere epiphenomena of a world that runs itself. We might think that “we” are colonising the galaxy, while to any other species observing, we might just look like a madly expanding sphere of von Neumann machines, with no valuable personhood present. Such is the vision of Utopia that results from imagining the future as being the present, but better, extrapolated without bound.
The Fun Sequence (long version, short version) says a lot about what sort of thing makes for a genuine Utopia, but I don’t think it contains examples of a day in the life. Perhaps it cannot, any more than a 13th century peasant’s dreams could contain anything resembling the modern world. One attempt I saw, which I can’t now find, imagined (this is my interpretation of it, not the way it was presented) a future that amounted to better BDSM scenes. This strikes me as about as realistic as a million years of sex with catgirls.
What will people do, when almost everything they currently do is driven by exactly those limits that it is the transhuman vision to eliminate? Think of everything you have done today—how much of it would a transhuman you in a transhuman world have done?
What I would want to do, or what I think I would do? I certainly would want to hold on to my values, but I’m not yet sure which ones.
I don’t see how you can just crank these very specialized phenomena several orders of magnitude higher and still remain remotely human. That’s the point of the essay: we wind up as something we would view as monstrous today.
I’ve had existential crises thinking about such things. Stuff like living forever or having my brain upgraded beyond recognition scares me, for reasons I can’t quite put into words.
I’m comforted by the argument that it won’t happen overnight. We will probably transition into such a world gradually, and it won’t feel so weird and shocking. And if we get it right, the AI will ask us what we want, and present us with arguments for and against our options, so we can decide what we actually want, rather than just getting stuck in a shitty future we wouldn’t want.