Well, mass public advocacy in the strict sense may not change public opinion quickly, but I’m still willing to give it a try.
I mean, what would be the actual downsides of a literal mob showing up at DeepMind’s headquarters holding “please align AI” giant banners?
(EDIT: maybe “mob” is not the right word, I’m not advocating for angry mobs burning down the AI labs… “crowd” would have been better).
I’m not completely opposed to public outreach. I think there should be some attempts to address misconceptions (e.g. that it is like Terminator; or at least like what people remember/infer about Terminator).
I haven’t really thought that through. It might be worth talking to people and seeing what they say.
I’m pretty opposed to public outreach to get support for alignment, but the alternative goal of whipping up enough hysteria to destroy the field of AI/the AGI development groups killing us seems much more doable. The reason is that, from my lifelong experience observing public discourse on topics I have expert knowledge of (e.g. nuclear weapons, China), it seems completely impossible to implant the exact right ideas into the public mind, especially on a complex subject. Once you attract attention to a topic, no matter how much effort you put into presenting the proper arguments, the conversation and people’s beliefs inevitably trend toward simple, meme-y, emotionally riveting ideas instead of the accurate ones. (The popular discourse on climate change is another good illustration of this.)
But in this case, maybe even if people latch onto misguided fears about Terminator or whatever, as long as they have some sort of intense fear of AI, it can still produce the intended actions. To be clear, I’m still very unsure whether such a campaign is a good idea at this point; it’s just a thought.
I think reaching out to governments is a more direct lever: civilians don’t have the power to shut down AI themselves (unless mobs literally burn down all the AGI offices), so the goal with public messaging would be to get the public to pressure the leadership into banning it, right? Why not cut out the middleman and make the leaders see the dire danger directly?
Holding “please align AI” signs in front of DeepMind’s headquarters is an idea. Attempting to persuade “the general public” is a bad idea. “The general public” will react too slowly and without any competence. We need to target the people actually doing the burning.