Probably between 1 and 10% if you put it that way, which is of course insanely high. But that’s not how I’d actually expect it to be framed. If it had come up as an issue, I would have expected it to go something like this:
Leslie Groves: I hear there’s a holdup.
J. Robert Oppenheimer: Yeah, about that. We’ve calculated that there’s a chance the device will initiate a chain reaction in the atmosphere, killing everyone on the planet and melting most of its crust. Consensus among the boys is this falls outside operational requirements.
Groves: How big of a chance?
Oppenheimer: We don’t know exactly. It’s a theoretical issue, and jargon jargon error bars jargon technobabble.
Groves: The Jerries won’t be waiting for us. I need you to hurry this up.
Oppenheimer: We don’t want to proceed with a full-scale test until we’ve ruled this out, but we can work out the theory in parallel. I’ll put that young fellow from the computation group on it.
Groves: Don’t disappoint me.
But that’s not how I’d actually expect it to be framed.
I’m not sure what you mean, or why you say this.
Suppose the young fellow working in parallel comes back and says it’s 0.95%, to the best of everyone’s knowledge. You say that you’d expect the government to proceed with the test and overrule any project members who disagreed, and that if they protested further, they’d be treated like other wartime dissenters and would, at the very least, be removed from the project.
To put it mildly, I’d rather that governments not accept a 0.95% chance of destroying all life on Earth in return for an advantage in a weapons race.
You estimate the government might press ahead even with 9% probability of extinction. If every competing government takes on a different risk of this magnitude—perhaps a risk of their own personal failure that is really independent of competitors, as with the risk of releasing an AI that turns out to be Unfriendly—then with 10 such projects we have 90% total probability of the extinction of all life.
I mean that the military administration of the Manhattan Project wasn’t actually equipped to deal with existential risk calculations, the scientific side of the project would have known this, and the administrative side would have known they’d known. It’s effectively a technical obstacle and would have been dealt with as such.
In actuality, that question resolved itself when further investigation showed that it wasn’t going to be a problem. But if the answer had been “no, we’ve done the math and we think it’s too risky”, I think that would have been accepted too (though probably not immediately or without resistance). I don’t think flat percentages would, at any stage, have been offered up for interpretation by people not competent to interpret them.
You estimate the government might press ahead even with 9% probability of extinction. If every competing government takes on a different risk of this magnitude—perhaps a risk of their own personal failure that is really independent of competitors, as with the risk of releasing an AI that turns out to be Unfriendly—then with 10 such projects we have 90% total probability of the extinction of all life.
Um, hypothetically, once the first SIAI is released (Friendly or not) it isn’t going to give the next group a go.
Only the odds on the first one to be released matter, so they can’t multiply to a 90% risk.
With that said, you’re right that it would be a good thing for governments to take existential risks seriously, just like it would be a good thing for pretty much everyone to take them seriously, ya?
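(As an aside on the arithmetic in this exchange: here is a minimal sketch in Python, assuming the flat 9% per-project figure and the 10 competing projects mentioned above. It just contrasts the naive additive total with what genuinely independent risks would give, and with the case where only the first release matters.)

    # Rough sketch of the risk arithmetic discussed above, assuming a flat
    # 9% per-project chance of catastrophe and 10 competing projects.
    p_single = 0.09      # assumed per-project risk (figure from the exchange)
    n_projects = 10      # assumed number of competing projects

    # Naive additive total quoted above: 10 * 9% = 90%.
    naive_total = n_projects * p_single

    # If every project ran to completion and the risks were independent,
    # the chance of at least one catastrophe would be 1 - (1 - p)^n.
    independent_total = 1 - (1 - p_single) ** n_projects

    print(f"naive sum:                  {naive_total:.0%}")        # 90%
    print(f"independent, all projects:  {independent_total:.1%}")  # ~61.1%

    # If the first project to finish ends the race either way (the point made
    # in the reply), only that first project's risk matters: ~9%.
    print(f"only first release matters: {p_single:.0%}")           # 9%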