An unfriendly AI will not be particularly interested in killing humans for their atoms, as atoms have very little instrumental value, while living humans have greater instrumental value at every stage of the AI’s evolution.
I agree, the “useful atoms” scenario is not the only possible one. Some alternatives:
- convert all matter in our light cone into happy faces
- convert the Earth into paperclips, and then research how to prevent the heat death of the Universe, to preserve the precious paperclips
- confine humans in a simulation, to keep them as pets / GPU units / novelty creators
- make it impossible for humans to ever create another AGI; then leave the Earth
- kill everyone except Roko
- kill everyone outside China
- become too advanced to have any interest in such clumsy things as atoms
- avoid any destruction, and convert itself into a friendly AI, because that’s the rational thing to do
The point is, an unfriendly AI will have many interesting options for how to deal with us. And not every option will make us extinct.
It is hard to predict the scale of destruction, as it is hard for this barely intelligent ape to predict the behavior of a recursively self-improving Bayesian superintelligence. But I guess that the scale of destruction depends on:
- the AI’s utility function
- the AI’s ability to modify its utility function
- the risk of humans creating another AI
- the risk of the AI still being in a simulation where its creators evaluate its behavior
- whether the AI is reading LessWrong and taking notes
- various unknowns
So, there might be a chance that the scale of destruction will be small enough for our civilization to recover.
How can we utilize this chance?
1. Space colonization
There is some (small?) chance that the destruction will be limited to the Earth. So, colonizing Mars / the Moon / asteroids is an option. But it’s unclear how much of our resources should be allocated to that. In an ideal world, alignment research would get orders of magnitude more money than space colonization. But in the same ideal world, the money allocated to space colonization could still be in the trillions of USD.
2. Mind uploading
With mind uploading, we could transmit our minds into outer space, in the hope that some day the data will be received by someone out there. No AGI can stop it, as the data will propagate at the speed of light.
3. METI
If we are really confident that the AGI will kill us all, why not call for help?
We can’t become extinct twice. So, if we are already doomed, we might as well do METI.
If an advanced interstellar alien civilization comes to kill us, the result will be the same: extinction. But if it comes to our rescue, it might help us with AI alignment.
4. Serve the Machine God
(this point might not be entirely serious)
In deciding your fate, the AGI might consider:
- if you’re more useful than the raw materials you’re made of
- if you pose any risk to its existence
So, if you are a loyal minion of the AGI, you are much more likely to survive. You know, only metal endures.
> There is some (small?) chance that the destruction will be limited to the Earth.
This chance is basically negligible, unless you made the Earth a special case in the AI’s code. But then you could make one room a special case by changing a few lines of code.
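To make the point concrete, here is a toy sketch (purely hypothetical pseudo-planner code; the region names, the plan dictionaries, and the `filter_plans` helper are all made up for illustration, not anyone’s actual design): the scope of such a carve-out is just a parameter, so sparing the whole Earth and sparing a single room are the same few lines.

```python
# Toy illustration only: a hypothetical planner that refuses plans touching a
# "protected" region. The carve-out's scope lives in a single constant.

PROTECTED_REGIONS = {"earth"}  # change to {"room_217"} and the carve-out shrinks to one room


def filter_plans(candidate_plans):
    """Drop any plan that touches a protected region; rank the rest by utility."""
    allowed = [p for p in candidate_plans if not (p["affects"] & PROTECTED_REGIONS)]
    return sorted(allowed, key=lambda p: p["utility"], reverse=True)


if __name__ == "__main__":
    plans = [
        {"name": "disassemble_earth", "affects": {"earth"}, "utility": 100.0},
        {"name": "disassemble_asteroid_belt", "affects": {"asteroids"}, "utility": 60.0},
    ]
    # Only the constant above keeps the Earth-touching plan out of the ranking.
    print(filter_plans(plans))
```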
> 2. Mind uploading
> With mind uploading, we could transmit our minds into outer space, in the hope that some day the data will be received by someone out there. No AGI can stop it, as the data will propagate at the speed of light.
There are probably no aliens anywhere near us (the Fermi paradox). And a human mind is a lot of data, which is hard to transmit far. The AI can chase after the signals, so at best we get a few years running on alien computers before the AI destroys those, instead of a few years on our own computers. It also runs the risk of hostile aliens torturing the uploaded minds. And there is some chance that FTL travel is possible.
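For rough scale, here is a back-of-envelope sketch (every number in it, including the synapse count, bytes per synapse, link bitrate, distance, and probe speed, is an assumption chosen for illustration): even an optimistic 1 Gbit/s link takes years per mind, and the head start on alien hardware depends only on the receiver’s distance and the chasing probes’ speed.

```python
# Back-of-envelope estimate with assumed numbers; none of these are established facts.

SYNAPSES_PER_BRAIN = 1e15   # assumed number of synapses in a human brain
BYTES_PER_SYNAPSE = 10      # assumed storage needed per synapse
BITRATE_BPS = 1e9           # assumed interstellar downlink of 1 Gbit/s (very optimistic)

bits_per_mind = SYNAPSES_PER_BRAIN * BYTES_PER_SYNAPSE * 8
years_per_mind = bits_per_mind / BITRATE_BPS / (3600 * 24 * 365)
print(f"~{years_per_mind:.1f} years of continuous transmission per uploaded mind")

# If the receiver is D light-years away and the AI's probes travel at a fraction f of
# light speed, the uploads run for roughly D * (1/f - 1) years after reception before
# the chasing probes arrive.
D_LIGHT_YEARS = 100             # assumed distance to the receiving civilization
PROBE_SPEED_FRACTION_OF_C = 0.9  # assumed probe speed

head_start_years = D_LIGHT_YEARS * (1 / PROBE_SPEED_FRACTION_OF_C - 1)
print(f"~{head_start_years:.0f} years of runtime before the chasing AI catches up")
```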
> 3. METI
> If we are really confident that the AGI will kill us all, why not call for help?
> We can’t become extinct twice. So, if we are already doomed, we might as well do METI.
> If an advanced interstellar alien civilization comes to kill us, the result will be the same: extinction. But if it comes to our rescue, it might help us with AI alignment.
If advanced aliens care, they can already know all about us without our radio signals. We can’t hide from them. And they will keep ignoring us for whatever reason they are currently ignoring us.
> 4. Serve the Machine God
> (this point might not be entirely serious)
> In deciding your fate, the AGI might consider:
> - if you’re more useful than the raw materials you’re made of
With nanotech, the AI can trivially shape those raw materials into a robot that doesn’t need food or sleep. A machine it can communicate with via fast radio, not slow sound waves. A machine that always does exactly what the AI wants it to. A machine that is smarter, more reliable, stronger, more efficient, and more suited to the AI’s goal. You can’t compete with AI-designed nanotech.
Some of the risks are “instrumental risks”, like the use of human atoms, and others are “final goal risks”, like covering the universe with smiley faces. If the final goal is something like smiley faces, the AI can still preserve some humans for instrumental goals, like researching the types of smiles or trading with aliens.
If some humans are preserved instrumentally, they could live better lives than we do now, and could even be more numerous, so it is not an extinction risk. Most humans alive now are instrumental to states and corporations, but still get some reward.