Directly advertising existential risk
Has anyone tried advertising existential risk?
Nick Bostrom's "End of Humanity" talk, for instance.
It costs about $0.20 per view for a video ad on YouTube, so if 0.2% of viewers give an average of $100, it would break even. Hopefully people would give more than that.
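As a rough sanity check, the break-even rate follows directly from the two figures assumed above ($0.20 per view and a $100 average donation; both are the thread's assumptions, not measured data):

```python
# Break-even sketch using the figures assumed in this thread
# (not real campaign data): $0.20 per YouTube video-ad view,
# and a $100 average donation from responders.
cost_per_view = 0.20
avg_donation = 100.00

# Fraction of viewers who must donate for donations to equal ad spend:
# spend per viewer == expected donation per viewer
break_even_rate = cost_per_view / avg_donation
print(f"{break_even_rate:.1%}")  # prints 0.2%
```

The sensitivity is worth noting: if the average gift were $10 rather than $100, the required response rate would be 2%, which is very high for any ad format.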
You can also target ads at groups likely to give more, like the highly educated.
I posted this suggestion in the open thread as well, before I had the karma to make a thread. Is that okay?
Beware of figures plucked from the air just because they “sound” small enough or big enough to do the work required of them. It is quite possible to run an ad that goes out to a million people and gets no responses.
What response do video ads on YouTube get, in terms not just of clicks but of whatever action the ad is intended to elicit?
There’s http://www.adweek.com/socialtimes/youtube-ads-highest-conversion-rates/204145, and https://www.thinkwithgoogle.com/articles/driving-donations-digitally.html
That last one is from the owner of YouTube, so take it with whatever grains of salt you think are appropriate.
I think that would vary too much depending on the video to make a meaningful comparison. Better to compare $0.20 to the opportunity costs of word of mouth and other methods of spreading existential risk awareness, isn't it?
First, it is fantastic to see people kicking around low-cost ideas that are potentially high-impact. If a hundred bucks could buy a meaningful change, it might be a bargain even by Effective Altruism standards. However, we should be extremely careful to investigate such ideas thoroughly. I am extremely glad that you shared this with us before acting on it, because I have been thinking a lot about the mechanisms behind movement growth, and I think they are particularly relevant to such ideas.
It seems to me that we do NOT need movement publicity. What we need is movement advocacy. There’s an important distinction, which is explained in depth by a very interesting paper by the Center for Effective Altruism.
What it boils down to is that there are two orthogonal concepts: people's awareness of any given cause, and their inclination toward or against it. For causes that are simple to explain, where people don't need any convincing, growth can be maximized simply by making more people aware of the cause. If, however, your cause sounds a little crazy, or it isn't immediately obvious that it is a real problem rather than hype, a scam, a cult, etc., then you need much more than a news sound-bite to convince anyone. In fact, a little shallow knowledge can actually hurt the cause, since new members recruited that way will be incredibly naive and misinformed, and will likely turn many, many people off of the cause as a whole. ("Oh, it's those nutcases.") To make things worse, every uninformed supporter is a huge liability to the cause for much the same reason.
Imagine if the scientific community thought global warming had less than a 50% chance of being real, rather than just random fluctuations in the data. They’d have to have a nuanced argument that playing Russian roulette with the planet isn’t a good risk, even if the odds are “good” (5/6 empty chambers). Then they’d still have to argue that we can and should do something about it, and that the interventions would be cost effective and successful.
X-risk has it even worse. The arguments are extremely nuanced, highly technical, and aren't definitive. (Fermi paradox, survivorship bias, anthropic reasoning, observation selection effects, and all sorts of techniques for probability assessment, to say nothing of all the specific possible forms of x-risk and all their technicalities.) I think it best to concentrate on increasing the number of technical papers published on X-risk, so that when we can't keep it out of the tabloids any longer, at least there will be a highly knowledgeable core community that won't be drowned out in the media by the under-informed anecdotes of "experts". Our main (maybe only) aim at the moment should be toward academia. That might be helped by having academic X-risk conferences available on YouTube, but it would be best if academics find out about them from colleagues. A targeted advertising campaign focused on academics may be somewhat valuable, but it may also hurt our credibility. Since most people, even academics, aren't likely to research the area thoroughly after reading an ad or even watching a video, it seems quite plausible that ads could do more harm than good. I'm not saying your idea is good or bad just yet, but simply advocating a bit more discussion before committing to it. Posting here is a good way of obtaining exactly that sort of discussion, so I've upvoted you for visibility. :)
That said, if we do want to introduce non-academics to X-risk, some methods are better than others. Personally, I find it difficult not to talk about my interests/obsessions with friends. I agree with you that Nick Bostrom’s TED talks are a good start, although I wouldn’t advocate anything shorter. The Wait But Why AI article is also fantastic. There’s always a large risk that someone will read a title/headline only, and come away with a negative first impression that will be hard to correct in the future.
If you are looking for ideas of how to further X-risk awareness, I’m working on a couple ideas, but I’d prefer not to share until I’m sure they are good ideas. If I suggest a bad idea that sounds good on the surface, it would be quite hard to stop the meme after planting it, and many well-meaning people could potentially cause harm. Although there is a lot of value in informal discussions and brainstorming, it is also easier to do damage to a cause than to further it.
You seem like a very down to earth guy, MarsColony_in10years :)
I'm not sure X-risk needs to be complicated though. The basic message is just "Future technology may be dangerous and needs to be studied more". That should be enough to support the cause. One doesn't need to go into the complicated things you mentioned, and I don't think Bostrom does.
The part in Bostrom's video where he talks about future people colonizing the galaxy, uploading themselves into computers, and reaching a posthuman condition should probably be cut for mainstream viewers, and maybe the expected utility calculations too. Other than that, I don't see what could turn people off.
That's a reasonably safe statement, but I can still see it being misconstrued as:
“Technology is bad.” (sounds vaguely like a particular flavor of liberal flag waving, so some types of conservatives may react with pro-economic growth flag-waving)
“The end is nigh!” (sounds like panic-inducing hysteria)
Even if the initial audience doesn’t interpret it that way, that may be how they explain it to their friends. Preppers will bend it to fit and justify their narrative, and so will the all-natural types. That’s just human nature.
I just re-watched Nick Bostrom’s “End of Humanity” TED talk, and am again impressed with his skill at presenting these things in the abstract without triggering any knee-jerk reactions. However, once it enters the public awareness, I expect these sorts of interpretations:
“20% chance of human extinction!” (Perhaps as a sensationalist headline somewhere.)
The idea that more people is better is extremely counter-intuitive for many, especially given the planet's current overpopulation. Many people have overgeneralized this heuristic. It took me many months of consideration before I eventually came around to Bostrom's way of thinking, that future lives should be weighted equally to our own. Never being born just doesn't feel as bad as death, until you get into the philosophical details. I've talked to some who would argue that humans are so destructive to nature that the earth would be better off without people, and who therefore actively advocate against things like space colonization, even as a backup plan. Of course, this is more a case of belief in belief than actual belief, since they would never actually take steps toward human extinction.
The idea of colonizing the universe is repulsive to some, who tend to argue that we shouldn't even consider spreading to other planets until we fix all the problems we have here first. They get a mental image of humans exhausting all natural resources in reach and destroying pristine planets. (Running out of asteroids is unlikely, due to the mind-bogglingly large amount of material in the asteroid belt alone. If we could manage to deplete all that before the sun enters its red giant phase, we'd be more than capable at that point of spreading to other stars to mine here and there instead of depleting any single region.)
Similarly, the idea that technological maturity is a good thing is counter-intuitive for many people, and may evoke a knee-jerk reaction. The media tends not to cover all the good things we’ve accomplished as a species, so most people would not agree that we are on an upward trajectory rather than downward. They are certainly right that some technologies really are bad things, but they don’t see the benefits of vaccines, modern medicine, sanitation, electricity, education, improved working conditions, leisure time, etc. compared to a century ago.
This also fits neatly into the narrative of “liberal scare tactic”, like many people perceive global warming to be.
Many religious people will find this absurd, since a loving god who looked after us would never let us destroy ourselves like this.
If you can narrow the target audience of the ad sufficiently to show it mostly to academics, it’s probably a net gain. However, just seeing it on an ad banner would be a very bad thing, because the default presumption is going to be that someone is making money off of you buying the argument. Are there ways to recommend it to academics on some social network or another, besides ads? Several social networks have “recommended” items, but I don’t know much about the algorithms that do the recommending. I’ve looked into the 3 ad types on YouTube, though:
YouTube "In-stream" ads, which play before videos, are annoying, and are likely to have a net-negative outcome for advocacy groups, even if they are net gains for selling products.
“In-display” ads (appear on the right of YouTube, marked with yellow “AD”) are much less intrusive, but still might make people wonder what our money-making scheme is. This one might be alright, especially if you can find a way to target academics, or at least intellectuals.
"In-search" ads would be much easier to target toward academics with specific interests, but it would be quite a narrow focus. A less narrow focus might include conspiracy theorists and alarmists, which would be detrimental to our movement. I can only think of a couple of good, academic-related search terms which would be used by academics unfamiliar with X-risk. Maybe things like "Red List Index", "IUCN Red List of Threatened Species", "Holocene extinction", "Quaternary extinction", "Cretaceous–Paleogene extinction", "K–Pg extinction", "Cretaceous–Tertiary extinction", "Triassic–Jurassic extinction", "Tr–J extinction", "Permian–Triassic extinction", "P–Tr extinction", "Late Devonian extinction", "Ordovician–Silurian extinction", and "O–S extinction". I'd avoid less obscure phrases like "background extinction rate", "K-T extinction", etc.
Of the ad options on YouTube, I think “in-search” ads would be the most beneficial, if directed only with very specific technical search terms like the ones I suggested. I’m still somewhat concerned about how nature-lovers might misconstrue some of the concepts when first introduced to them. This is all just my attempt to follow the maxipok rule, as Bostrom suggests.
Although the YouTube comments on his “end of humanity” video are much better than the trolling YouTube is infamous for, they do still give you a feel for the range of reactions that people are likely to have. A few are generally agreeable, though. Just remember that the people who react to the title and never watch the video will have much worse reactions.
It’s great to have responses more thought out than one’s original idea!
As for the people who would misunderstand existential risk: are you thinking it's better to leave them in the dark as long as possible so as not to disturb the early existential risk movement, or that they will be likelier to accept existential risk once there is more academic study? Or both? The downside, of course, is that without publicity you will have fewer resources and brains on the problem.
I agree it is best not to mention far future stuff. People are already familiar with nuclear war, epidemics, and AI trouble (with Gates, Hawking, and Musk stating their concern), so existential risk itself isn't really that unfamiliar.
For the part about people just seeing the title and moving on: you can use a suitably vague title, but even if not, what conclusions can they possibly draw from a title alone? I don't think people remember skimming over one.
I have no idea what those search terms mean, but it sounds like a good idea. Perhaps you should run such a campaign?
I'm arguing "both", but mainly that we don't need those people who would misunderstand or misrepresent X-risk. People react against things they disagree with much more strongly than they react in favor of things they agree with. Consider two social movements:
1) a movement with 1000 reasonable-sounding people and 1 crazy-sounding person.
2) a movement with 1000 reasonable-sounding people and 500 crazy-sounding people.
I'm arguing that movement 2 will grow more slowly than movement 1, and will never become anywhere near as large. This is because new members will be very strongly turned off by seeing a movement that looks 1/3 crazy, even if they are slightly attracted to the non-crazy bits. If I wrote a script that inserted random YouTube-quality comments into LessWrong, you would get the strong impression that the community had slid into the gutter, and many people would probably leave, despite there being precisely as many interesting and thoughtful comments as before. The crazier a movement looks on the surface, the harder it will be for academics to be taken seriously by their colleagues, and the fewer academics will be willing to risk their reputation by advocating or publishing on that topic.
As for titles, you are probably right that most people will forget them immediately, and any impressions they form would be negligible.
The search terms are mostly biological names for various extinction events throughout history, such as the one that killed the dinosaurs. I basically just skimmed through Wikipedia for obscure technical terms related to extinction.
Ah, well, paleontologists aren't exactly our target group.
If you target people likely to understand X-risk, they should contribute no more crazy-sounding people than the X-risk movement currently has, should they? Like IT/computer science people, or people with other technical degrees? Sci-fi fans, perhaps? Any kind of technophile?
Good points. The first 3 search terms I suggested were more biology related than paleontology, but the bulk were paleontology. Neither are terribly relevant fields, and I get the impression that interdisciplinary research is rare. I guess it’s a judgement call as to how large the benefits might be to turn discussion of previous and current extinction events (super-volcanoes, asteroid impacts, ice-ages, etc) toward addressing future events (nuclear winter?).
I’m not quite sure what disciplines would be optimum to target. Are there any talks on engineered pandemics that we might target toward epidemiologists? Perhaps making General AI researchers more aware of the risks would be beneficial, and Nick Bostrom does have a lovely TED talk and several talks at technical conferences on the topic. However, I haven’t read enough in those areas to know what keywords might be used only by the experts.
You mean like zombies coming to eat your brain? There is a large variety of movies and games out.
No, something else? Like large asteroids or maybe aliens coming to kill us? There is a large variety of movies and games out.
Still wrong? Maybe an evil AI? There is a large variety of movies and games out.
And just in case you want people to think, that’s not what advertising does.
The same logic applies to political advertising. If someone advertised existential risk to... well... everyone, but particularly to visible voter groups, which I've quickly brainstormed here, one could hack a politician's decision calculus into supporting existential risk mitigation.