I edited it out, but I don’t see why dying off is inevitable, as our extinction isn’t directly a convergent instrumental subgoal. I think a lot of bastardized forms of goal maximization don’t involve dead humans, although clearly most involve disempowered humans.
As I’ve argued here, it seems very likely that a superintelligent AI with a random goal will turn Earth and most of the rest of the universe into computronium, because increasing its intelligence is the dominant instrumental subgoal for whatever goal it has. This would mean the inadvertent extinction of humanity and (almost) all biological life. One of the reasons it would do this is the potential threat of grabby aliens/a grabby alien superintelligence.
However, this is a hypothesis we didn’t thoroughly discuss during the AI Safety Project, so we didn’t feel confident enough to include it in the story. Instead, we just hinted at it and included the link to the post.
I have a lot of issues with the disassembling-atoms line of thought, but I won’t argue it here; I think it’s been argued against enough in popular posts.
But I think the gist of it is this: Earth is a tiny fraction of the resources in the solar system and nearby systems (an even smaller fraction of the light cone), and because of heat it’s one of the worst places to host a computer compared to, say, Pluto. So ultimately it doesn’t take much for an AI to avoid consuming Earth for its resources.
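To put “tiny fraction” in rough numbers, here’s a back-of-the-envelope sketch (the masses are round-number approximations I’m supplying, not figures from the post):

```python
# Rough masses in kilograms (round numbers, not precise values).
earth = 6.0e24
jupiter = 1.9e27
saturn = 5.7e26
other_planets = 1.0e26   # remaining planets combined, very roughly
sun = 2.0e30

planetary_mass = earth + jupiter + saturn + other_planets
# Earth is ~0.2% of the planetary mass, and ~3e-6 of the whole system.
print(f"Earth / all planets:  {earth / planetary_mass:.4f}")
print(f"Earth / whole system: {earth / (sun + planetary_mass):.2e}")
```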
Grabby aliens don’t really stop us from using the solar system or nearby systems.
And some of my own thoughts: the speed of light probably limits how useful computers that large (say, planet-sized) can be, while a legion of AI systems would probably be slow to coordinate. They would still be very powerful, but a planet-sized computer just doesn’t sound realistic in the literal sense. A planet-sized compute cluster? Sure; maybe heat makes that impractical, but sure.
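To make the latency point concrete, here’s a rough sketch of one-way light-travel time across machines of different sizes (the sizes are illustrative assumptions on my part):

```python
C = 3.0e8  # speed of light, m/s

# Illustrative machine sizes in meters.
sizes_m = {
    "chip (~3 cm)": 0.03,
    "datacenter (~1 km)": 1.0e3,
    "planet-sized computer (~12,700 km)": 1.27e7,
}

for name, size in sizes_m.items():
    # One-way signal time at light speed, in milliseconds.
    print(f"{name:>36}: {size / C * 1e3:.3g} ms one-way")
```

A signal needs roughly 40 ms just to cross something Earth-sized, versus a fraction of a microsecond across a chip or datacenter, so even setting heat aside, latency alone pushes against a literal planet-sized single computer.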