He clearly cares about AI going well and has been willing to invest resources in improving those odds in the past, via OpenAI and then Neuralink.
Both of these examples betray an extremely naive understanding of AI risk.
OpenAI was intended to address AI x-risk by making superintelligence open source. This is, IMO, not a credible way to avoid someone—probably someone in a hurry—getting a decisive strategic advantage.
Neuralink… I just don’t see any scenario where humans have much to contribute to superintelligence, or where “merging” is even a coherent idea, etc. I’m also unenthusiastic on technical grounds.
SpaceX. Moving to another planet does not save you from misaligned superintelligence. (Being told this is, I hear, what led Musk to his involvement in OpenAI.)
So I’d attribute it to some combination of too many competing priorities, and simply misunderstanding the problem.
Moving to another planet does not save you from misaligned superintelligence.
Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.
Neuralink… I just don’t see any scenario where humans have much to contribute to superintelligence, or where “merging” is even a coherent idea
The only way I can see Musk’s position making sense is if Neuralink is actually a 4D chess move to crack the brain’s algorithm and use it to beat everyone else to AGI, rather than the reasoning he usually gives in public for why Neuralink is relevant to AGI. Needless to say, I am very skeptical of this hypothesis.
Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.
Let’s use Toby Ord’s categorisation—and ignore natural risks, since the background rate is low. Assuming a self-sustaining civilisation on Mars which could eventually resettle Earth after a disaster:
nuclear war—avoids accidental/fast escalation; unlikely to help in deliberate war
extreme climate change or environmental damage—avoids this risk entirely
engineered pandemics—strong mitigation
unaligned artificial intelligence—lol nope.
dystopian scenarios—unlikely to help
So Mars colonisation handles about half of these risks, and maybe 1⁄4 of the total magnitude of risks. It’s a very expensive mitigation, but IMO still clearly worth doing even solely on X-risk grounds.
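(As a rough sanity check on that "maybe 1/4" figure, here is a back-of-the-envelope sketch in Python, using Ord's illustrative per-century estimates from The Precipice quoted from memory, so treat them as only roughly right. The dystopia line is a placeholder guess of my own, since Ord doesn't give a separate figure for it; the point is the relative sizes, which are dominated by unaligned AI.)

```python
# Back-of-the-envelope check of the "maybe 1/4 of total risk magnitude" claim.
# Probabilities are Ord's illustrative per-century estimates (quoted from memory),
# except the dystopia line, which is a placeholder guess rather than Ord's number.
risks = {
    # name: (probability this century, mitigated by a Mars colony per the list above)
    "nuclear war":                    (1 / 1000, True),
    "climate / environmental damage": (1 / 1000, True),
    "engineered pandemics":           (1 / 30,   True),
    "unaligned AI":                   (1 / 10,   False),
    "dystopian scenarios":            (1 / 30,   False),  # placeholder, not from Ord
}

total     = sum(p for p, _ in risks.values())
mitigated = sum(p for p, helps in risks.values() if helps)

print(f"total risk considered: {total:.3f}")
print(f"mitigated by Mars:     {mitigated:.3f}")
print(f"fraction mitigated:    {mitigated / total:.0%}")  # comes out around a fifth to a quarter
```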
I strongly believe that nuclear war and climate change are not existential risks, by a large margin.
For engineered pandemics, I don’t see why Mars would be more helpful than any other isolated pockets on Earth—do you expect there to be less exchange of people and goods between Earth and Mars than, say, North Sentinel Island?
Curiously enough, the last scenario you pointed out—dystopias—might just become my new top candidate for an x-risk that Mars colonization could actually mitigate. Need to think more about it, though.
It does take substantially longer to get to Mars than to reach any isolated pocket on Earth. So unless the pathogen’s incubation period is longer than the journey to Mars, it’s likely that Martians would know that passengers aboard the ship were infected before it arrived.
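(To put rough numbers on that, here is a small sketch; the transit time and incubation figures are my own approximate assumptions, not anything from this thread. A conventional Earth-Mars transfer takes on the order of six to nine months, far longer than the incubation period of most known natural pathogens.)

```python
# Rough comparison of Earth-Mars transit time against approximate incubation
# periods of a few well-known natural pathogens. All figures are ballpark
# assumptions for illustration, not authoritative values.
TRANSIT_DAYS = 180  # optimistic end of a conventional ~6-9 month transfer

incubation_days = {
    "influenza":   2,
    "measles":     12,
    "smallpox":    14,
    "hepatitis B": 90,
}

for pathogen, days in incubation_days.items():
    verdict = ("symptoms would appear in transit" if days < TRANSIT_DAYS
               else "could arrive silently")
    print(f"{pathogen:12s} ~{days:3d} days incubation -> {verdict}")
```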
The absolute travel time matters less for disease spread in this case. It doesn’t matter how long it would theoretically take to travel to North Sentinel Island if nobody actually goes there for years on end. Disease won’t spread to such places naturally.
And if an organization is so hell-bent on destroying humanity as to track down every last isolated pocket of human settlement on Earth (a difficult task in itself, as they’re obscure almost by definition) and plant the virus there, they’ll most certainly have no trouble bringing it to Mars either.
I had always assumed that any organization trying to destroy the world with an engineered pathogen would basically release whatever they made and then hope it did its work.
IDK, this topic gets into a lot of information hazard, where I don’t really want to speculate because I don’t want to spread ideas for how to make the world a lot worse.
I’d worry that, if we’re looking at a potentially civilization-ending pandemic, a would-be warlord with a handful of followers might suddenly decide that North Sentinel Island seems like a rather attractive place to go.
Depends how super. FOOM to godhood isn’t the only possible path for AI.
AI doesn’t need godhood to affect another planet. Simply scaling up architectures that are equal in intelligence to the smartest humans so that a billion of them work in parallel is enough.