I have a lot of uncertainty about the difficulty of robotics, and the difficulty of e.g. designing superviruses or other ways to kill a lot of people. I do agree that in most worlds robotics will be solved to a human level before AI will be capable of killing everyone, but I am generally really averse to unnecessarily constraining my hypothesis space when thinking about this kind of stuff.
>90% seems quite doable with a well-engineered virus (especially one with a long infectious incubation period). I think 99%+ is much harder and probably out of reach until after robotics is thoroughly solved, but like, my current guess is a motivated team of humans could design a virus that kills 90–95% of humanity.
Can a motivated team of humans design a virus that spreads rapidly, stays dormant for a while, and then kills most humans through a mechanism that is difficult to counter before we can stop it? And it would have to happen before we develop AIs that can detect these sorts of latent threats anyway.
You have to realize that if COVID had been like this, we would have mass-trialed mRNA vaccines as soon as they were available, along with a lot of Hail Mary procedures, since the alternative would have been extinction.
These slightly-smarter-than-human AIs will be monitored by other such AIs, which will probably be rewarded for defecting on them (the AIs they defect on get wiped out, while the defectors may, for example, get to replicate more).
I think such a takeover could be quite difficult to pull off in practice. A world with lots of slightly-smarter-than-human AIs will be more robust to takeover, there's only a limited time window to even attempt it, failure would mean death, and humanity would be far more disciplined against this than it was against COVID.
Despite my general interest in open inquiry, I will avoid talking about my detailed hypothesis of how to construct such a virus. I am not confident this is worth the tradeoff, but the costs of speculating about the details here in public do seem non-trivial.