Maybe the risk of nuclear war during the Cold War? This 1961 argument by Bertrand Russell is cogent enough to sound correct, it would probably have sounded like a pretty wild prediction to most people at the time, and, all things considered, it turned out to be essentially false, though modest steps along the lines of option 3 did happen (we didn’t get a literal political union, but we did get the fall of the Iron Curtain, globalization entangling the world in mutual economic interest, and the UN). Anyone who, on gut instinct, committed the status quo fallacy and bet against Russell would have won that bet.
Yeah, that definitely seems very analogous to the current AI x-risk discourse! I especially like the part where he says the UN won’t work.
Do you know what the counter-arguments were like? I couldn’t even find any.
I would agree that the UN is a sham. I don’t see why you count Russell as being wrong on this point.
Not really, though I expect there must have been some objections. I think digging into the 1950s-60s era and the way people talked about nuclear risk back then would probably be very instructive. The talk “The AI Dilemma” (look it up on YouTube if you haven’t seen it already) even brings up specifically the airing of the TV movie “The Day After”, which concluded with a panel discussion between experts on nuclear risk, including, among others, Henry Kissinger and Carl Sagan. The huge amount of justified holy terror of nuclear war back in that day most likely worked: enough people were scared enough of it that in the end we avoided it. Worryingly, it’s now, far from the peak of that scare, that we’re starting to see “but maybe nuclear war wouldn’t be that destructive” takes (many appeared around the beginning of the war in Ukraine).
Oh, but one weakness is that this example has anthropic shadow. It would be stronger if there were an example that has a similar argument structure to AI x-risk but does not involve x-risk.
So like a strong negative example would be something where we survive if the argument is correct, but the argument turns out false anyways.
That being said, this example is still pretty good. In a world where strong arguments are never wrong, we don’t observe Russell’s argument at all.
Disagree about anthropic shadow, because the argument includes two possible roads to life: barbarism or a world government. If the argument were correct, then, conditioned on our still being alive and assuming that barbarism would have led to the argument being lost, an observer still reading Russell’s original text should see a world government with 100% probability. And we don’t.
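To spell that out (a sketch; write W for “world government” and R for “an observer is still reading Russell’s text”, and take from above the assumption that both extinction and barbarism leave no such reader):

$$P(W \mid \text{argument correct},\ R) = 1 \quad\Longrightarrow\quad P(\text{argument correct} \mid R,\ \neg W) = 0$$

Conditioning on R already screens off the two unobservable outcomes, so observing ¬W is a clean falsification rather than a shadowed one.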
Oh right, good point. I still think anthropic shadow introduces some bias, but not quite as bad, since there was the world-government out.
The idea of a bias only holds if, e.g., what Russell considered 100% of the possibilities actually constituted only 80% of them: then you might say that if we could sample all the branches in which an observer looks back at the argument, they’d see the argument proved right with less-than-80% probability, because in some of the branches in which one of those three options comes to pass there are no observers.
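To see the size of that effect, here’s a minimal Monte Carlo sketch with made-up numbers (the 50/20/10/20 split below is purely illustrative, and I’m assuming barbarism also leaves no one reading the text):

```python
import random

# Hypothetical branch probabilities (made-up numbers, for illustration only).
# Russell's three options cover 80% of branches; "status_quo" is the
# unlisted 20% that he implicitly ruled out.
OUTCOMES = {
    "extinction": 0.50,        # argument right, but no observers remain
    "barbarism": 0.20,         # argument right; assume the text is lost too
    "world_government": 0.10,  # argument right, and observers can see it
    "status_quo": 0.20,        # argument wrong, and observers can see it
}

def sample_outcome(rng: random.Random) -> str:
    """Draw one world-branch according to the probabilities above."""
    r = rng.random()
    cumulative = 0.0
    for outcome, p in OUTCOMES.items():
        cumulative += p
        if r < cumulative:
            return outcome
    return outcome  # guard against floating-point rounding near r = 1.0

rng = random.Random(0)
branches = [sample_outcome(rng) for _ in range(100_000)]

# Condition on there being an observer who can look back at the argument:
# only world-government and status-quo branches qualify.
observed = [b for b in branches if b in ("world_government", "status_quo")]
seen_right = sum(b == "world_government" for b in observed) / len(observed)

print("unconditional P(one of the three options) = 0.80")
print(f"P(argument looks right | someone looks back) ~ {seen_right:.2f}")
```

With those numbers the three listed options really do cover 80% of branches, but a surviving reader scores the argument right only about a third of the time, because the branches where it was right in the deadliest ways are exactly the ones with no readers.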
But while that is correct, the argument is that those are the only three options. Defined as such, a single counterexample is enough to declare the argument false. No one here is denying that extinction or civilisation collapse from nuclear war were very real possibilities. But the roads we care about here, the possible paths to survival, turned out to be wider than Russell imagined.
Yeah, Russell’s argument is ruled out by the evidence.
> The idea of a bias only holds if e.g. what Russell considered 100% of all possibilities only actually constituted 80% of the possibilities

I’m considering the view of a reader of Russell’s argument. If a reader thought “there is an 80% chance that Russell’s argument is correct”, how good of a belief would that be?
Because IRL, Yudkowsky assigns a nearly 100% chance to his doom theory, and I need to come up with the x such that I should believe “Yudkowsky’s doom argument has an x% chance of being correct”.
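One crude way to pin down such an x (a sketch; the reference class is hypothetical and the choice of prior is mine): treat past arguments of this type as exchangeable Bernoulli trials and apply Laplace’s rule of succession. With a uniform prior, after seeing k of n comparable arguments turn out correct, the posterior mean is

$$x = \frac{k+1}{n+2}$$

so a reference class containing only Russell’s failed argument (k = 0, n = 1) gives x ≈ 33%, and almost all of the real work is in assembling a larger, fairer reference class.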