I assumed that humans would at least die off, if not be actively exterminated. I still need to know how, and what happens after that. That’s not 100 percent a joke.
To a certain extent it doesn’t matter. Or rather, it’s a question of expected utility: if 10% of outcomes are amazing but 60% are horrible, that suggests you might want to avoid that route.
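A minimal sketch of that expected-utility point. The utility values and the 30% "mediocre" remainder are purely illustrative assumptions on my part, not estimates from the thread:

```python
# Toy expected-utility comparison; all probabilities and utilities
# here are illustrative assumptions, not real estimates.
outcomes = {
    "amazing":  {"p": 0.10, "u": +100.0},
    "horrible": {"p": 0.60, "u": -1000.0},
    "mediocre": {"p": 0.30, "u": 0.0},  # assumed remainder
}

# Sanity check: probabilities should sum to 1.
assert abs(sum(o["p"] for o in outcomes.values()) - 1.0) < 1e-9

expected_utility = sum(o["p"] * o["u"] for o in outcomes.values())
print(f"Expected utility: {expected_utility:+.1f}")  # -590.0: avoid this route
```

Even with the amazing outcome weighted generously, the 60% horrible mass dominates.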
I edited it out, but I don’t see why dying off is inevitable, since our extinction isn’t directly a convergent instrumental subgoal (CIS). I think a lot of bastardized forms of goal maximization don’t involve dead humans, although clearly most involve disempowered humans.
As I’ve argued here, it seems very likely that a superintelligent AI with a random goal will turn Earth and most of the rest of the universe into computronium, because increasing its intelligence is the dominant instrumental subgoal for whatever goal it has. This would mean the inadvertent extinction of humanity and (almost) all biological life. One reason for this is the potential threat of grabby aliens/a grabby alien superintelligence.
However, this is a hypothesis we didn’t thoroughly discuss during the AI Safety Project, so we didn’t feel confident enough to include it in the story. Instead, we just hinted at it and included the link to the post.
I have a lot of issues with the disassembling-atoms line of thought, but I won’t argue it here; I think it’s been argued against enough in popular posts.
But I think the gist of it is that Earth is a tiny fraction of the resources in the solar system and nearby systems (an even smaller fraction of the light cone), and one of the worst places to host a computer compared to, say, Pluto, because of heat. So ultimately it costs very little to avoid consuming Earth for its resources.
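To put a rough number on "tiny fraction", here’s a back-of-the-envelope sketch. Using planetary mass as a proxy for "resources" is my assumption; the masses are standard published values, rounded:

```python
# Back-of-the-envelope: Earth as a fraction of solar-system mass,
# using mass as a crude proxy for "resources" (my assumption).
# Masses in kg, standard published values, rounded.
M_EARTH = 5.97e24
M_SUN = 1.99e30
# Jupiter, Saturn, Neptune, Uranus, Earth, Venus, Mars, Mercury:
M_PLANETS = (1.90e27 + 5.68e26 + 1.02e26 + 8.68e25
             + 5.97e24 + 4.87e24 + 6.42e23 + 3.30e23)

print(f"Earth / planetary mass:    {M_EARTH / M_PLANETS:.2%}")            # ~0.22%
print(f"Earth / total system mass: {M_EARTH / (M_SUN + M_PLANETS):.1e}")  # ~3e-06
```

Earth is about a fifth of a percent of the planetary mass alone; sparing it forgoes almost nothing.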
Grabby aliens don’t really stop us from using the solar system and nearby systems.
And some of my own thoughts: the speed of light probably limits how useful very large computers are (say, planet-sized), while a legion of AI systems is probably slow to coordinate. They will still be very powerful, but a planet-sized computer just doesn’t sound realistic in the literal sense. A planet-sized compute cluster? Sure; maybe heat makes that impractical, but sure.
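A quick sketch of that latency argument (the 1 GHz clock is an arbitrary assumption, chosen only to give the numbers a scale):

```python
# Rough signal-latency numbers for the "planet-sized computer" point.
# One-way light travel time across a structure of a given size,
# expressed in clock cycles of an assumed 1 GHz processor.
C = 299_792_458.0  # speed of light, m/s
CLOCK_HZ = 1e9     # assumed 1 GHz clock

for label, meters in [("chip (1 cm)", 0.01),
                      ("datacenter (100 m)", 100.0),
                      ("Earth diameter (~12,742 km)", 1.2742e7)]:
    seconds = meters / C
    print(f"{label:30s} {seconds:.3e} s = {seconds * CLOCK_HZ:,.0f} cycles")
```

An edge-to-edge signal on an Earth-sized machine costs ~40 ms, tens of millions of clock cycles, which is why a loosely coupled cluster looks more plausible than a single coherent computer.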
I think the key is that it’s anti-corrigible and power-seeking. It’s not entirely clear why we die off as part of its goals, however.
What’s a “CIS”?