But now imagine if—like this Spokesperson here—the AI-allowers cried ‘Empiricism!’, to try to convince you to do the blindly naive extrapolation from the raw data of ‘Has it destroyed the world yet?’
Ideally Yudkowsky would have linked to the arguments he is commenting on. This would demonstrate that he is responding to real, prominent, serious arguments, and that he is not distorting those arguments. It would also have saved me some time.
The first hit I got searching for “AI risk empiricism” was Ignore the Doomers: Why AI marks a resurgence of empiricism. The second hit was AI Doom and David Hume: A Defence of Empiricism in AI Safety, which linked Anthropic’s Core Views on AI Safety. Neither is remotely analogous to the Spokesperson’s claims of 100% risk-free returns.
Next I sampled several Don’t Worry about the Vase AI newsletters, including their “some people are not so worried” coverage. I didn’t see any cases of blindly naive extrapolation from the raw data of ‘Has AI destroyed the world yet?’. I found Alex Tabarrok saying “I want to see that the AI baby is dangerous before we strangle it in the crib.” I found Jacob Buckman saying “I’m Not Worried About An AI Apocalypse”. These positions are related, but they clearly admit the possibility of danger and argue for waiting for evidence of danger before acting.
One argument I have seen is a blindly naive extrapolation from the raw data of ‘Has tech destroyed the world yet?’ The Techno-Optimist Manifesto, for example, implies this argument. My current best read of the quoted text above is that it attacks an exaggerated and simplified version of this type of view. In other words, a straw man.