Is principled mass outreach possible for AGI X-risk?

Over a year ago, Rohin Shah wrote this about people trying to slow or stop AGI development through mass public outreach about the dangers of AGI:

But it really doesn’t seem great that my case for wide-scale outreach being good is “maybe if we create a mass delusion of incorrect beliefs that implies that AGI is risky, then we’ll slow down, and the extra years of time will help”. So overall my guess is that this is net negative.

(On my beliefs, which I acknowledge not everyone shares, expecting something better than “mass delusion of incorrect beliefs that implies that AGI is risky” if you do wide-scale outreach now is assuming your way out of reality.)

I agree much more with the second paragraph than the first one.

I think there’s still an angle here that few have tried in a really public way. Namely, ignorance and asymmetry. (There is definitely a better term or two for what I’m about to describe, but I’ve forgotten it. Probably from Taleb, or one of the SSC posts about people being cautious in seemingly-odd ways due to their boundedness.)

One Idea

A high percentage of voting-eligible people in the US… don’t vote. An even higher percentage vote only in presidential elections, or only in some presidential elections. I’d bet a lot of money that most of these people aren’t working under a Caplan-style non-voting logic, but instead under something like “I’m too busy” or “it doesn’t matter to me / either way / from just my vote”.

Many of these people, being politically disengaged, would not be well-informed about political issues (or even have strong and/or coherent values related to those issues). What I want to see is an empirical study that asks these people “are you aware that you’re not well-informed about these issues?” and “does that awareness, in turn, factor into your not voting?”.

I think there’s a world, which we might live in, where lots of non-voters believe something akin to “Why should I vote, if I’m clueless about it? Let the others handle this lmao, just like how the nice smart people somewhere make my bills come in.”

In a relevant sense, I think there’s an epistemically legitimate and persuasive way to communicate “AGI labs are trying to build something smarter than humans, and you don’t have to be an expert (or have much of a gears-level view of what’s going on) to think this is scary. If our smartest experts still disagree on this, and the mistake-asymmetry is ‘unnecessary slowdown vs. human extinction’, then it’s perfectly fine to say ‘shut it down until [someone/some group] figures out what’s going on’”.

To be clear, there’s still a ton of ways to get this wrong, and those who think otherwise are deluding themselves out of reality. I’m claiming that real-human-doable advocacy can get this right, and it’s been mostly left untried.

Extreme Care Still Advised If You Do This

Most persuasion, including digital persuasion, is one-to-many “broadcast”-style; “going viral” usually just means “a broadcast happened that nobody recognized as a broadcast”, like an algorithm suggesting a video to a lot of people at once. Given this, plus anchoring bias, you should expect, and be very paranoid about, the “first thing people hear sets the conversation” effect. (Think of how many people’s opinions are copypasted from the first classy, mass-market video essay they saw about the subject (a John Oliver segment, say), or the first Fox News commentary on it.)

Not only does the case for X-risk need to be made first, it also needs to be made right (even in a restricted form like my suggestion above) the first time. Actually, that’s another reason to prioritize my restricted-version suggestion: it’s more explicitly robust to small issues.

(If somebody does this in real life, they need to clearly end on something like “Even if a minor detail like [name a specific X] or [name a specific Y] is wrong, it doesn’t change the underlying danger, because the labs are still working towards Earth’s next intelligent species, and there’s nothing remotely strong about the ‘safety’ currently in place.”)

In closing… am I wrong? Can we do this better?

I’m highly interested in better ideas for the goal of mass-outreach-about-AGI-X-risk, whether or not they’re in the vein of my suggestion. I think alignment and EA people are too quick to jump to “mass persuasion will lead to wrong actions, or be too Dark Arts for us, or both”. Even if that’s true 90% of the time, the other 10% still seems worth aiming for!

(Few people have communications-imagination in general, and I don’t think I personally have that much more of it than others here, but it seems like something that someone could have an unusually high amount of.)

And, of course, I’m (historically) likely to be missing one or more steps of logic that, if I knew them, would change my mind on the feasibility of this project. If you (a media person) want to try any of this, wait a while for contrary comments to come in, and try to interact with them.

This post is mostly copied from my own comment here.

Crossposted to the EA Forum.