For those who don’t want to, the gist is: Given the same level of specificity, people will naturally give more credit to the public thinker who argues that society or industry will change, because it’s easy to recall active examples of things changing and hard to recall the vast number of negative examples where things stayed the same. If you take the Nassim Taleb route of vapidly predicting, in an unspecific way, that interesting things are eventually going to happen, interesting things will eventually happen and you will be revered as an oracle. If you take the Francis Fukuyama route of vapidly saying that things will mostly stay the same, you will be declared a fool every time something mildly important happens.
The computer security industry knows this dynamic very well. No one notices the Fortune 500 company that doesn’t suffer the ransomware attack. Outside the industry, this active vs. negative bias is so prevalent that information security standards are constantly derided as “horrific” without anyone articulating the sense in which they fail, and despite the fact that online banking works pretty well virtually all of the time. Inside the industry, vague and unverified predictions that Companies Will Have Security Incidents, or that New Tools Will Have Security Flaws, are treated much more favorably in retrospect than vague and unverified predictions that companies will mostly do fine. Even if you’re right that an attack vector is unimportant and probably won’t lead to any real-world consequences, in retrospect your position will be considered obvious. On the other hand, if you say that an attack vector is important, and you’re wrong, people will forget about that in three years. So you’re better off listing everything that could possibly go wrong[1], even though certain mishaps are much more likely than others, and collecting oracle points when half of your failure scenarios are proven correct.
This would be bad on its own, but it’s compounded by several other problems. For one thing, predictions of doom naturally inflate the importance and future salary expectations of information security researchers[2], in the same sense that inflating the competence of the Russian military is good for the U.S. defense industry. When you tell someone their Rowhammer hardware attacks are completely inexploitable in practice, that’s no fun for anyone: infosec researchers aren’t all going to get paid buckets of money to defend against Rowhammer exploits, and journalists have no news article. For another thing, the security industry (especially the offensive side) is selected to contain people who believe computer security is a large societal problem and that they themselves can get involved, or who at least want to believe it’s possible for them to get involved if they put in a lot of time and effort. So security researchers are already inclined to hear you out if you’re about to tell them how obviously bad information security at most companies really is.
In retrospect, a value add of the post is precisely in raising this consideration: incentives can make a huge difference in what you come to believe. A big takeaway is that I’m much less of a fan of security mindset as practiced by Eliezer, at least without massive scope changes, and it’s a reason why I automatically treat arguments for AI doom that aren’t backed up by an empirical story with suspicion.