Good point, and this appears to be a general issue with human space settlement.
Suppose that the technology to do it cost-effectively arrives before AI, e.g. cheap spaceflight arrives this century and AI doesn't. Then even if an unfriendly AI shows up centuries later, it can catch up with and overwhelm the human settlements, no matter how far from Earth they've reached. (The AI can survive higher accelerations, reach higher speeds, reproduce more quickly before launching new settlement missions, etc.)
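As a rough back-of-the-envelope sketch of why distance doesn't help much (the speeds, the 200-year head start, and the catch_up_time helper are all made-up illustrative assumptions, not anything from the original comment): if the pursuer is substantially faster, even centuries of head start buy the settlers only a few light-years.

```python
# Sketch under assumed numbers: settlers cruise at speed v_settler and an
# unfriendly AI launches probes `delay_years` later at speed v_ai > v_settler.
# Measured from the settlers' departure, the probes overtake them at time t where
#   v_ai * (t - delay_years) = v_settler * t
#   =>  t = v_ai * delay_years / (v_ai - v_settler)

def catch_up_time(v_settler: float, v_ai: float, delay_years: float) -> float:
    """Years after the settlers' departure at which the AI catches up.

    Speeds are fractions of c; assumes constant cruise speeds and v_ai > v_settler.
    """
    if v_ai <= v_settler:
        raise ValueError("the pursuer must be faster than the settlers")
    return v_ai * delay_years / (v_ai - v_settler)

# Illustrative (assumed) numbers: settlers at 1% of c, AI probes at 10% of c,
# launched 200 years later.
v_s, v_a, delay = 0.01, 0.10, 200.0
t = catch_up_time(v_s, v_a, delay)
print(f"caught ~{t:.0f} years after the settlers leave, "
      f"~{v_s * t:.1f} light-years from Earth")
# -> caught ~222 years after the settlers leave, ~2.2 light-years from Earth
```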
Worse, increasing the number of human settlements most likely increases the chance that someone, somewhere builds an unfriendly AI. So, somewhat surprisingly, space settlement could increase existential risk rather than reduce it.
Worse, increasing the number of human settlements most likely increases the chance that someone, somewhere builds an unfriendly AI.
I don’t see this logic. The chance of AI being made is based on how many people are working on it, not how many locations there are where people can work on AI.
More extreme environments may force faster technological developments (e.g. WW2); also see the excellent "Destination: Void" by Frank Herbert, where a colony ship has to develop an AI to survive the journey to the nearest star, with unexpected results!
I believe the reasoning is [more human settlements] --> [more total humans] --> [more humans to make an AI]. Whether or not settlements on, e.g., Mars will actually lead to more total humans on the timescale we're talking about is up for debate.
Yes, there is this effect. I was making a general point about what happens if we get space settlements a long time before AI (or, to be precise, AGI). More lebensraum gives more people.
Also, the number of remote populations increases the chance of someone building an unfriendly AI. A single Earth-bound community has more chance of co-ordinating research on AI safety (which includes slowing down AI deployment until they are confident about safety, and policing rogue efforts that just want to get on with building AIs). This seems much harder to achieve if there are lots of groups spread widely apart, with long travel times between them, and no interplanetary cops forcing everyone to behave. Or there may be fierce competition, with everyone wanting to get their own (presumed safe) AIs deployed before nasty ones show up from the other planets.