SL0 people think "hacker" refers to a special type of dangerous criminal, and they either don't know what synthetic biology, nanotechnology, and artificial intelligence are or have extremely confused ideas about them.
Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0s, maybe without mentioning exotic technologies? And would they change their charitable behavior?
I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).
I agree with your estimates/answers. There are certainly SL0 existential risks (most people in the US understand nuclear war), but I think the issue in question is that the risks most targeted by the “x-risks community” are above those levels—asteroid strikes are SL2, nanotech is SL3, AI-foom is SL4. I think most people understand that x-risks are important in an abstract sense but have very limited understanding of what the risks the community is targeting actually represent.
I thought this article was for SL0 people—that would give it the widest audience possible, which I thought was the point?
If it's aimed at SL0s, then we'd want to go for an SL1 image.