If you were an evil genius with, say, $1B of computing power, what is the most harm you could possibly do to society?
AI risk is existential and, for now, theoretical. Cataloguing what a malicious actor could do with $1B of compute today will not help you think clearly about the risks posed by AGI. It's like trying to understand the risk of global nuclear war by asking, "What's the worst a terrorist could do with a few tons of TNT?" The problem isn't that the scale is wrong; it's that the risks are of a completely different kind. A terrorist with 100,000 tons of TNT is a serious problem, but it's not the problem that Thomas Schelling and the nuclear deterrence experts were working on during the Cold War.
What is the most evil AI that could be built, today?
This is an entirely different question, and the answer is that there is no public evidence anyone has the ability to create an evil AI today. I don't want to belabor the point, but nuclear fission hadn't even been discovered in 1937, yet by 1945 the US had dropped two atomic bombs on Japanese cities, killing hundreds of thousands of civilians.