My current estimates lead me to think that either AI is hard, the filter is already behind us, or there are implementable “mundane” solutions to existential risk. Or technological risk/progress on Earth is very unusual (maybe some organisms would never be tempted to use a nuke because they are too “emotionally” connected to the pain of other members of their species).
Consider the following possibilities for how long it will take for humans to develop AI (friendly or otherwise) if we don’t kill ourselves via viruses, nuclear catastrophe etc.
30 years: In this case our chances of creating AI are way too high for this to filter species. Earth will have had nuclear weapons for less than one hundred years by the time it creates AI, and most nuclear scenarios would not actually wipe out all humans, imo. If this is the normal technological path, then the chance of making it to AI is way too high.
100 years: In this case humanity seems unlikely to reach AI, imo. Viral threats will become more and more dangerous, and unlike nukes they are hard to control. I don’t know whether nano risk is serious on this time frame, but it may be. And there are other serious risks. On the other hand, I don’t think the odds are so low that this would work as a filter. I am pretty confident humanity has over a one in a thousand chance of making it another 100 years without using uploads/AI, and even a one in a million chance is not enough for a filter (see the sketch after this list).
200+ years: Here the odds of making it to AI are so low, imo, that the filter would be ahead of us. But 200+ years to AI means that AI is very hard, at least relative to many predictions on this site. This possibility is not ludicrous, imo; Scott Aaronson is a smart man, and he endorses it.
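To make the point about filter-sized probabilities concrete, here is a minimal sketch. The number of candidate civilizations below is my own illustrative assumption, not a figure from this comment; the only point is that a late filter has to push the expected number of survivors, N × p, far below one, and “one in a million” does not come close.

```python
# Minimal sketch with illustrative numbers (N is an assumption, not a figure
# from the comment above). If N civilizations reach our stage and each one
# survives to build AI with probability p, the expected number of survivors
# is N * p. For a late Great Filter to explain the apparent silence, that
# expectation has to be far below one.

def expected_survivors(n_civilizations: float, p_survive: float) -> float:
    """Expected number of civilizations that pass a late filter."""
    return n_civilizations * p_survive

N = 1e9  # hypothetical count of civilizations reaching our stage
for p in (1e-3, 1e-6, 1e-9):
    print(f"p = {p:.0e}: expected survivors = {expected_survivors(N, p):.0e}")

# p = 1e-03: expected survivors = 1e+06
# p = 1e-06: expected survivors = 1e+03  (the "one in a million" case)
# p = 1e-09: expected survivors = 1e+00
```

Under these assumed numbers, even a one-in-a-million survival chance leaves on the order of a thousand civilizations passing the filter, which is why such odds cannot do the filtering work.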
Maybe there is some method of managing the risks of technology for several hundred years. Stable totalitarianism has been suggested. Another would be a zero-privacy world, where anyone could spy on anyone else and press an alarm button if they see someone doing something dangerous (then everyone democratically votes to lynch them?). An even stronger version would be possible if mind reading is genuinely possible in real time.
But still either the filter is behind us, AI is hard, or there is some radical solution to handle existential risks.
You think that, without using uploads or AI, humanity has less than a 50% chance of surviving the next hundred years? That seems very surprising to me.
Viral threats are a danger, yes, but while they may cause massive depopulation, even kill off a significant fraction of Earth’s population (especially if genetically engineered viruses are used as a terrorist weapon), they seem unlikely to be able to kill off everyone, particularly if people on small islands start shooting down any approaching planes to prevent contamination. Nanotechnological threats may be more all-inclusive, but while they might destroy an entire continent, it seems unlikely that nanotechnology capable of crossing an ocean before countermeasures can be developed would be created accidentally.
And within thirty years, there’s even a chance of a small colony on Mars—if they can get to the point where they’re growing their own food, rather than having it shipped from Earth, and where they have enough people to sustain their population, then even something that renders Earth uninhabitable would not wipe out all of humanity; and the Sun appears stable enough to keep going for a good few million years still.
So… am I misunderstanding you, or do you see some threat to humanity that I fail to notice?
> Consider the following possibilities for how long it will take for humans to develop AI (friendly or otherwise) if we don’t kill ourselves via viruses, nuclear catastrophe etc.
There are other possibilities. One is simply “never”; another is that AI is much less powerful than current predictions suggest; a third is that interstellar travel is impossible; a fourth is that AI singletons don’t reproduce and therefore don’t colonize.
> Stable totalitarianism has been suggested.
But it does not exist.
> Another would be a zero-privacy world, where anyone could spy on anyone else and press an alarm button if they see someone doing something dangerous (then everyone democratically votes to lynch them?).
There are lots of problems with this concept. First of all, reducing global risks this way would require a world government, and that would almost certainly stop progress.
> And within thirty years, there’s even a chance of a small colony on Mars
The chances of having a sustainable colony in the foreseeable future (~20 years) are close to zero.
I got two of those three options :-) http://lesswrong.com/lw/kvm/the_great_filter_is_early_or_ai_is_hard/