Sailor_Vulcan
I just read this whole article last night; it really gives a sense of the scope and difficulty of humanity actually surviving. In fact, I wonder how likely anyone is to come up with a way to fix this whole “no fire alarm” mess if they haven’t read through this article. My first thought is that the solution is either to build a fire alarm, or something like it, that can actually work under these circumstances, or to find a way to win that doesn’t involve a fire alarm, or to change the circumstances.
It sounds like the amount of rationality necessary for the majority of laypeople to understand all this and respond appropriately is too high to expect them to reach, generally speaking, because the conclusion is too many inferential steps away. Of course, maybe a sufficiently large minority of laypeople having honest, thoughtful discussions about AI risk would be enough to make a critical difference.
In fact, just yesterday I spoke to a layperson about AI risk, and not only did they understand, they believed me and agreed with me about how serious it is. It was easy to get them to listen and understand, and they aren’t someone I’m close with, nor someone with any particular reason to trust me more than anyone else. They were curious and interested, and prior to the conversation they didn’t seem aware of any AI risks besides unemployment.
I suspect you want to reach people who have not already invested themselves in the subject and in their preconceptions. Part of the problem here is likely that too many of the people already talking about AI are invested in their opinions. A layperson who isn’t completely batshit crazy might still have trouble changing their mind, and might hold crazy beliefs they refuse to scrutinize on plenty of other subjects; but if they haven’t yet made up their mind about AI, and you approach them about it the right way, using good communication skills and relating to their feelings, you could still get them to come to the right conclusions the first time around. You need to make the subject interesting to them and the conversation engaging, in a way that doesn’t make them feel helpless and hopeless and doesn’t put them on the defensive, while still explaining things accurately.
If they actually have a reason to care beforehand besides wanting to save the world (because that will just make them feel overwhelmed with crushing responsibility), then things might go a bit better. If only really brave people are able to respond appropriately to AI risk, then we need to either improve people’s bravery, or make it so that people don’t need to be so brave to respond appropriately, or make it so that the people who are already being brave can have more of an impact.