Here’s a simple test which can be used to evaluate all individuals and groups claiming to be qualified to teach rational thinking.
How much have they written or otherwise contributed on the subject of nuclear weapons?
As a thought experiment, imagine that I walk around all day with a loaded gun in my mouth, but I typically don’t find the gun interesting enough to discuss. In such a case, would you consider me an authority on rational thinking? In this example, the gun in one person’s mouth represents the massive hydrogen bombs in all of our mouths.
Almost all intellectual elites will fail this test. Once this is seen, one’s relationship with intellectual elites can change substantially.
Note: despite the different username, I’m the author of the handbook and a former CFAR staff member.
I disagree with this take as specifically outlined, even though I do think there’s a kernel of truth to it.
Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons!
I suspect that Phil is unaware that the vast majority of both CFAR staff and prolific LWers have indeed 100% passed the real version of his test, which is writing and contributing to the subject of existential risk, especially that from artificial intelligence.
Phil may disagree with the claim that nuclear weapons are something like third on the list, rather than the top item, but that doesn’t mean he’s right. And CFAR staff certainly clear the bar of “spending a lot of time focusing on what seems to them to be the actually most salient threat.”
I agree that if somebody seems to be willfully ignoring a salient threat, they have gaps in their rationality that should give you pause.
Hi again Duncan,

Mainly, I disagree with it because it presupposes that obviously the important thing to talk about is nuclear weapons!
Can AI destroy modern civilization in the next 30 minutes? Can a single human being unilaterally decide to make that happen, right now, today?
I feel that nuclear weapons are a very useful tool for analysis because, unlike emerging technologies such as AI, genetic engineering, etc., they are very easily understood by almost the entire population. So if we’re not talking about nukes, which we overwhelmingly are not across the culture at every level of society, it’s not because we don’t understand. It’s because we are in a deep denial similar to how we relate to our own personal mortality. To debunk my own posts, puncturing such deep denial with mere logic is not very promising, but one does what one knows how to do.
I suspect that Phil is unaware that the vast majority of both CFAR staff and prolific LWers have indeed 100% passed the real version of his test, which is writing and contributing to the subject of existential risk, especially that from artificial intelligence.
Except that is not the test I proposed. That’s ignoring the most pressing threat to engage a threat that’s more fun to talk about. That said, any discussion of X risk must be applauded, and I do so applaud.
The challenge I’ve presented is not to the EA community in particular, who seem far ahead of other intellectual elites on the subject of X risk generally. I’m really challenging the entire leadership of our society. I tend to focus the challenge mostly at intellectual elites of all types, because that’s who I have the highest expectations of. You know, it’s probably pointless to challenge politicians and the media on such subjects.
Can AI destroy modern civilization in the next 30 minutes?
Doubt it, but it might depend on how much of an overhang we have. My timelines aren’t that short, but if there were an overhang and we were just a few breakthroughs away from recursive self-improvement, would the world look any different than it does now?
Can a single human being unilaterally decide to make that happen, right now, today?
Oh, good point. Pilots have intentionally crashed planes full of passengers. Kids have shot up schools, not expecting to come out alive. Murder-suicide is a thing humans have been known to do. There have been a number of well-documented close calls in the Cold War. As nuclear powers proliferate, MAD becomes more complicated.
Nuclear war is still about #3 on my catastrophic risk list, depending on how you count things. But the number of humans who could plausibly do this remains relatively small. How many human beings could plausibly bioengineer a pandemic? I think the number is greater, and increasing as biotech advances. Time is not the only factor in risk calculations.
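To make the actor-count point concrete, here is a minimal toy sketch of the kind of comparison being gestured at. It is my own illustration, not anything from this thread: every number in it is a hypothetical placeholder, and the model (independent actors, a single per-actor probability) is deliberately crude. All it shows is the structural point that a larger and growing pool of capable actors can dominate the calculation even when each individual actor is slower-acting and less likely to act.

```python
# Toy expected-harm comparison. All numbers are hypothetical placeholders
# chosen purely for illustration; they are not estimates from this thread
# or from any other source.

def expected_annual_harm(num_capable_actors: int,
                         p_attempt_per_actor_per_year: float,
                         p_success_given_attempt: float,
                         harm_if_successful: float) -> float:
    """Expected harm per year, assuming actors act independently."""
    p_per_actor = p_attempt_per_actor_per_year * p_success_given_attempt
    p_any_success = 1 - (1 - p_per_actor) ** num_capable_actors
    return p_any_success * harm_if_successful

# Hypothetical scenario A: a small pool of actors who could act very quickly.
fast_small_pool = expected_annual_harm(10, 1e-4, 0.5, 1.0)

# Hypothetical scenario B: a much larger (and growing) pool of actors,
# each individually less likely to attempt or to succeed.
slow_large_pool = expected_annual_harm(10_000, 1e-5, 0.1, 1.0)

print(f"small pool, fast-acting harm: {fast_small_pool:.6f}")
print(f"large pool, slow-acting harm: {slow_large_pool:.6f}")
```

With these placeholder inputs the larger pool comes out roughly twenty times higher, which is the only point of the sketch: speed of onset is one input among several.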
And likely neither of these results in human extinction, but the pandemic scares me more. No, nuclear war wouldn’t do it. That would require salted bombs, which have been theorized, but never deployed. Can’t happen in the next 30 minutes. Fallout becomes survivable (if unhealthy) in a few days. Nobody is really interested in bombing New Zealand. They’re too far away from everybody else to matter. Nuclear winter risk has been greatly exaggerated, and humans are more omnivorous than you’d think, especially with even simple technology helping to process food sources. Not to say that a nuclear war wouldn’t be catastrophic, but there would be survivors. A lot of them.
A communicable disease that’s too deadly (like SARS-1) tends to burn itself out before spreading much, but an engineered (or natural!) pandemic could plausibly thread the needle and become something at least as bad as smallpox. A highly contagious disease that doesn’t kill outright but causes brain damage or sterility might be similarly devastating to civilization, without being so self-limiting. Even New Zealand might not be safe. A nuclear war ends. A pandemic festers. Outcomes could be worse, it’s more likely to happen, and it’s becoming more likely over time. It’s #2 for me.
And #1 is an intelligence explosion. This is not just a catastrophic risk, but an existential one. An unaligned AI destroys all value, by default. It’s not going to have a conscience unless we put one in. Nobody knows how to do that. And short of a collapse of civilization, an AI takeover seems inevitable in short order. We either figure out how to build one that’s aligned before that happens, and it solves all the other solvable risks, or everybody dies.