Hmmm… I think that might help recruit active participants even if it doesn’t really help get passive supporters.
I emphatically think we should be asking, “What is the base rate of validity for claims similar to ours that the average person has likely heard about?” So if we want to preach a gospel of risk and danger, we should take into account just how seriously the average person or policy-maker has learned from experience to take risk and danger, and how many other forms of risk and danger they are trying to weigh at once. Even though FHI, for instance, doesn’t consider global warming an existential risk, I think that for the average person the expected value of damage from global warming is much higher than the expected value of damage from UFAI or harmful nanotechnology, because their subjective probability of any kind of AI or nanotechnology powerful enough to cause such harm is extremely low. So our claims get filed under “not nearly as pressing as global warming”.
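To make that concrete with purely illustrative numbers of my own (not anyone’s actual estimates): a policy-maker who assigns a 90% probability to global warming doing, say, $10 trillion of damage is looking at an expected loss of $9 trillion, while assigning a 0.1% probability to UFAI or nanotechnology doing even $100 trillion of damage yields an expected loss of only $0.1 trillion. On those numbers the filing decision isn’t close, however much we might dispute the probabilities.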
(I’ve committed this sin myself, and I’m not entirely sure I consider it a sin. The primary reason I think AI risk should be addressed quickly is that it’s comparatively easy to address, and successfully addressing it has a high positive payoff in and of itself. If we had to pick one of the two risks to justify retooling the entire global economy by force of law, I would have to choose global warming.)