I don’t think “it didn’t even come close” is sufficient to say that 5% was too paranoid.
I know the principles behind an atomic bomb, but I don’t know how much U-238 you need for critical mass. If someone takes two fist-sized lumps of U-238 and proposes to smash them together, I’d give… probably ~50% chance of it causing a massive explosion. But I’d also give maybe about 10% probability that you need like ten times as much U-238 as that. If that happens to be the case, I still don’t think that 50% is too paranoid, given my current state of knowledge.
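That mixture reasoning can be made explicit. Here is a toy sketch (the conditional numbers are my own illustrative assumptions, not the commenter's) of how uncertainty about the required mass feeds into the overall ~50% estimate:

```python
# Credence that two fist-sized lumps are enough material (the commenter
# gives ~10% to needing ten times as much, so ~90% to "enough").
p_mass_sufficient = 0.9
# Assumed chance of a massive explosion given sufficient material.
p_boom_if_sufficient = 0.55
# If ten times the material is needed, assume no massive explosion.
p_boom_if_insufficient = 0.0

# Law of total probability over the mass hypothesis.
p_boom = (p_mass_sufficient * p_boom_if_sufficient
          + (1 - p_mass_sufficient) * p_boom_if_insufficient)
print(round(p_boom, 3))  # 0.495, i.e. roughly the stated ~50%
```

The point is that even a sizeable chunk of probability on "the lumps are far too small" only drags the headline estimate down modestly, so 50% remains a defensible number under that state of knowledge.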
There are people who do know how much U-238 you need, and their probability estimates will presumably be close to 0 or close to 1. And today, we can presumably work through the math and point out what the limits of Eurisko are that stop it from FOOMing. But if we hadn’t done the math at the time, 5% isn’t obviously unreasonable.
Tangential, but: U-238 is fissionable but not fissile; no amount of U-238 will give you a massive explosion if you bang it together. It’s U-235 that’s the fissile isotope.
(Even banging that together by hand won’t give you a massive explosion, though it will give you a moderately large explosion and an extremely lethal dose of radiation: the jargon is “predetonation” or “fizzle”. You need to bring a critical mass into existence hard and fast, e.g. by imploding a hollow sphere with explosive lenses, or a partial reaction will blow the pieces apart before criticality really has a chance to get going.)
I don’t think I can prove that I’m not coming at it from a hindsight-biased perspective.
But I think I can say confidently that today’s technology is at least a qualitative leap away from Strong AI, let alone FOOM AI. To make that clearer: I think no currently existing academic, industrial, or personal project will achieve Strong AI or FOOM. Concretely:
In the next 2 years, the chance of Strong AI and/or FOOM AI being developed is no more than 0.2%.
So that’s a 2-year period over which I estimate the chance of Strong AI or FOOM as substantially lower than the 5% EY says we should have assigned, even in retrospect, to Eurisko’s risk of FOOMing.