“All” humans? Like, maybe no, I expect a few would survive, but the future wouldn’t be human, it’d be whatever distorted things those humans turn into. My core take here is that humans generalize basically as poorly as we expect AIs to (maybe a little better, but on a log scale, not by much), in the sense that their preferences stop pointing at the things they themselves thought they were pointing at once they get a huge increase in power: the crown wearing the king, drug-seeking behavior, luxury sapping people’s motivation, etc. If you solve “make an AI entirely obedient to a single person”, then that person needs to be wise enough not to screw that up, and I trust exactly no one to even successfully use that situation to do what they themselves want, never mind what the people around them want. For an evocative caricature of the intuition here, see Rick Sanchez.
The vast majority of humans who have ever existed are already dead. The overwhelming majority of currently-living humans should expect a 95%+ chance of dying within a century.
If immortality is solved, it will only apply to “that distorted thing those humans turn into”. Note that the stereotypical Victorian would understand this completely: there may be biological similarities between them and today’s humans, but they’re culturally a different species.
I mean, we’re not going to the future without getting changed by it, agreed. But how quickly one has to figure out how to make good use of a big power jump seems to have a big effect on how much risk that jump poses to your ability to actually implement the preferences you’d have had if you hadn’t rushed yourself.