But that’s not how I’d actually expect it to be framed.
I’m not sure what you mean, or why you say this.
Suppose the young fellow working in parallel comes back and says it’s 0.95% to the best of everyone’s knowledge. You say that you’d expect the government to proceed with the test and overrule any project members who disagreed. And if they protested further, they’d be treated like other dissenters during wartime and would at the very least be removed from the project.
To put it mildly, I’d rather that governments not accept a 0.95% chance of destroying all life on Earth in return for an advantage in a weapons race.
You estimate the government might press ahead even with a 9% probability of extinction. If every competing government takes on a separate risk of this magnitude (a risk of its own project failing, genuinely independent of its competitors’, as with the risk of releasing an AI that turns out to be Unfriendly), then with 10 such projects we have a 90% total probability of the extinction of all life.
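For readers who want to check the arithmetic in that last figure, here is a quick sketch, assuming the numbers above (ten projects, each taking a 9% risk; the figures are the comment’s, the combination formula is standard probability):

```python
# Combining per-project extinction risks, using the assumed figures above:
# ten competing projects, each taking an independent 9% chance of disaster.
p = 0.09   # assumed per-project extinction risk
n = 10     # assumed number of competing projects

naive_sum = n * p                  # the "90% total probability" reading
independent = 1 - (1 - p) ** n     # chance at least one goes wrong, if all ten actually run

print(f"naive sum of risks:      {naive_sum:.0%}")    # 90%
print(f"independent combination: {independent:.1%}")  # ~61%
```

Summing the risks gives the 90% figure; treating them as ten independent events that are all run to completion gives 1 − 0.91^10 ≈ 61%.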
I mean that the military administration of the Manhattan Project wasn’t actually equipped to deal with existential risk calculations, the scientific side of the project would have known this, and the administrative side would have known they’d known. It’s effectively a technical obstacle and would have been dealt with as such.
In actuality, that question resolved itself when further investigation showed that it wasn’t going to be a problem. But if the answer had been “no, we’ve done the math and we think it’s too risky”, I think that would have been accepted too (though probably not immediately or without resistance). I don’t think flat percentages would at any stage have been offered up for interpretation by people not competent to interpret them.
You estimate the government might press ahead even with a 9% probability of extinction. If every competing government takes on a separate risk of this magnitude (a risk of its own project failing, genuinely independent of its competitors’, as with the risk of releasing an AI that turns out to be Unfriendly), then with 10 such projects we have a 90% total probability of the extinction of all life.
Um, hypothetically, once the first SIAI is released (Friendly or not), it isn’t going to give the next group a go.
Only the odds on the first one to be released matter, so they can’t multiply to a 90% risk.
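A minimal simulation of that point, under the same assumed figures (ten projects at 9% each, and each equally likely to finish first; all of this is illustrative, not anyone’s actual estimate): if the first AI released ends the race, Friendly or not, only the winner’s risk is ever realized, so the per-project risks don’t stack up.

```python
import random

def race_once(risks):
    """One simulated race: whichever project happens to finish first is the only
    one whose risk is realized; its release (Friendly or not) pre-empts the rest."""
    winner_risk = random.choice(risks)     # assumes each project is equally likely to finish first
    return random.random() < winner_risk   # True = that first release destroys the world

# Illustrative numbers only: ten projects, each with the 9% risk discussed above.
risks = [0.09] * 10
trials = 100_000
freq = sum(race_once(risks) for _ in range(trials)) / trials
print(f"extinction frequency across simulated races: {freq:.1%}")   # about 9%, not 90%
```

With identical risks the answer is 9% by inspection; the simulation only adds something if the competing projects carry different risks, in which case the realized risk is the average over whichever project wins the race.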
With that said, you’re right that it would be a good thing for governments to take existential risks seriously, just like it would be a good thing for pretty much everyone to take them seriously, ya?