You seem like a very down-to-earth guy, MarsColony_in10years :)
I’m not sure X-risk needs to be complicated, though. The basic idea is just “Future technology may be dangerous and needs to be studied more”. That should be enough to support the cause. One doesn’t need to go into the complicated things you mentioned, and I don’t think Bostrom does.
The part in Bostrom’s video where he talks about future people colonizing the galaxy, uploading themselves into computers, and reaching a posthuman condition should probably be cut for mainstream viewers, and maybe the expected utility calculations too. Other than that, I don’t see what could turn people off.
“Future technology may be dangerous and needs to be studied more”
That’s a reasonably safe statement, but I can still see it misconstrued as:
“Technology is bad.” (sounds vaguely like a particular flavor of liberal flag-waving, so some types of conservatives may react with pro-economic-growth flag-waving)
“The end is nigh!” (sounds like panic-inducing hysteria)
Even if the initial audience doesn’t interpret it that way, that may be how they explain it to their friends. Preppers will bend it to fit and justify their narrative, and so will the all-natural types. That’s just human nature.
I just re-watched Nick Bostrom’s “End of Humanity” TED talk, and am again impressed with his skill at presenting these things in the abstract without triggering knee-jerk reactions. However, once the topic enters public awareness, I expect these sorts of interpretations:
“20% chance of human extinction!” (Perhaps as a sensationalist headline somewhere.)
The idea that more people is better is extremely counter-intuitive for many, especially given the planet’s current overpopulation; many people have overgeneralized that heuristic. It took me many months of consideration before I eventually came around to Bostrom’s way of thinking, that future lives should be weighted equally to our own. Never being born just doesn’t feel as bad as death, until you get into the philosophical details. I’ve talked to some who argue that humans are so destructive to nature that the earth would be better off without people, and who therefore actively advocate against things like space colonization, even as a backup plan. Of course, this is more a belief in belief than an actual belief, since they would never actually take steps toward human extinction.
The idea of colonizing the universe is repulsive to some, who tend to argue that we shouldn’t even consider spreading to other planets until we fix all the problems we have here first. They picture humans exhausting every natural resource in reach and destroying pristine planets. (Running out of asteroids is unlikely, given the mind-bogglingly large amount of material in the asteroid belt alone. If we could manage to deplete all of that before the sun enters its red giant phase, we’d be more than capable by then of spreading to other stars and mining a little here and there instead of depleting any single region.)
Similarly, the idea that technological maturity is a good thing is counter-intuitive for many people, and may evoke a knee-jerk reaction. The media tends not to cover the good things we’ve accomplished as a species, so most people would not agree that we are on an upward rather than a downward trajectory. They are certainly right that some technologies really are harmful, but they lose sight of the benefits of vaccines, modern medicine, sanitation, electricity, education, improved working conditions, leisure time, etc., compared to a century ago.
This also fits neatly into the “liberal scare tactic” narrative, which is how many people perceive global warming.
Many religious people will find this absurd, reasoning that a loving god who looks after us would never let us destroy ourselves like this.
If you can narrow the ad’s target audience enough that it is shown mostly to academics, it’s probably a net gain. However, just seeing it in an ad banner would be a very bad thing, because the default presumption will be that someone is making money off of you buying the argument. Are there ways to recommend it to academics on some social network or another, besides ads? Several social networks have “recommended” items, but I don’t know much about the algorithms that do the recommending. I have looked into the three ad types on YouTube, though:
YouTube “In-stream” ads (the videos that play before the video you clicked on) are annoying, and are likely to have a net-negative outcome for advocacy groups, even if they are net gains for selling products.
“In-display” ads (which appear to the right of YouTube videos, marked with a yellow “AD”) are much less intrusive, but might still make people wonder what our money-making scheme is. This one might be alright, especially if you can find a way to target academics, or at least intellectuals.
“In-search” ads would be much easier to target toward academics with specific interests, but it would be quite a narrow focus. A less narrow focus might pull in conspiracy theorists and alarmists, which would be detrimental to our movement. I can only think of a handful of good, academic-sounding search terms that would be used by academics unfamiliar with X-risk. Maybe things like “Red List Index”, “IUCN Red List of Threatened Species”, “Holocene extinction”, “Quaternary extinction”, “Cretaceous–Paleogene extinction”, “K–Pg extinction”, “Cretaceous–Tertiary extinction”, “Triassic–Jurassic extinction”, “Tr–J extinction”, “Permian–Triassic extinction”, “P–Tr extinction”, “Late Devonian extinction”, “Ordovician–Silurian extinction”, and “O–S extinction”. I’d avoid less obscure phrases like “background extinction rate”, “K-T extinction”, etc.
Of the ad options on YouTube, I think “in-search” ads would be the most beneficial, if directed only at very specific technical search terms like the ones I suggested. I’m still somewhat concerned about how nature-lovers might misconstrue some of the concepts when first introduced to them. This is all just my attempt to follow the maxipok rule Bostrom suggests: maximize the probability of an OK outcome.
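For concreteness, here is a minimal sketch (in Python, since no tooling was specified) of how those terms might be organized into a keyword list for such a campaign. The split into “conservation” and “paleontology” buckets and the exact-match bracket formatting are my illustrative assumptions, not the syntax of any particular ad platform:

    # Hypothetical keyword list for an "in-search" campaign, built from
    # the terms suggested above. Grouping and formatting are illustrative.

    CONSERVATION_TERMS = [
        "Red List Index",
        "IUCN Red List of Threatened Species",
        "Holocene extinction",
    ]

    PALEONTOLOGY_TERMS = [
        "Quaternary extinction",
        "Cretaceous–Paleogene extinction", "K–Pg extinction",
        "Cretaceous–Tertiary extinction",
        "Triassic–Jurassic extinction", "Tr–J extinction",
        "Permian–Triassic extinction", "P–Tr extinction",
        "Late Devonian extinction",
        "Ordovician–Silurian extinction", "O–S extinction",
    ]

    # Phrases judged too mainstream to target, per the reasoning above.
    EXCLUDED_TERMS = ["background extinction rate", "K-T extinction"]

    def exact_match(terms):
        """Wrap each phrase so it would match only that exact query."""
        return [f"[{term}]" for term in terms]

    if __name__ == "__main__":
        for keyword in exact_match(CONSERVATION_TERMS + PALEONTOLOGY_TERMS):
            print(keyword)

The point of exact matching is to keep the focus narrow: broader matching would pull in exactly the conspiracy-theorist and alarmist searches mentioned above.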
Although the YouTube comments on his “End of Humanity” video are much better than the trolling YouTube is infamous for, they still give you a feel for the range of reactions people are likely to have; a few are even generally agreeable. Just remember that the people who react to the title and never watch the video will have much worse reactions.
It’s great to have responses more thought out than one’s original idea!
As for the people who would misunderstand existential risk: are you thinking it’s better to leave them in the dark as long as possible, so as not to disturb the early existential-risk movement, or that they will be likelier to accept existential risk once there is more academic study? Or both?
The downside, of course, is that without publicity you will have fewer resources and brains on the problem.
I agree it is best not to mention far-future stuff. People are already familiar with nuclear war, epidemics, and AI trouble (with Gates, Hawking, and Musk stating their concern), so existential risk itself isn’t really that unfamiliar.
As for people who just see the title and move on: you can use a suitably vague title, but even if not, what conclusions could they possibly draw from a title alone? I don’t think people remember skimming past one.
I have no idea what those search terms mean, but it sounds like a good idea. Perhaps you should run such a campaign?
I’m arguing “both”, but mainly that we don’t need the people who would misunderstand or misrepresent X-risk. People react against things they disagree with much more strongly than they react in favor of things they agree with. Consider two social movements:
1) A movement with 1000 reasonable-sounding people and 1 crazy-sounding person.
2) A movement with 1000 reasonable-sounding people and 500 crazy-sounding people.
I’m arguing that movement 2 will grow more slowly than movement 1, and will never become anywhere near as large. This is because new members will be very strongly turned off by a movement that looks one-third crazy (500 out of 1500), even if they are slightly attracted to the non-crazy parts. If I wrote a script that inserted random YouTube-quality comments into LessWrong, you would get the strong impression that the community had slid into the gutter, and many people would probably leave, despite there being precisely as many interesting and thoughtful comments as before. The crazier a movement looks on the surface, the harder it is for academics working on the topic to be taken seriously by their colleagues, and the fewer academics will be willing to risk their reputations by advocating or publishing on it.
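To make the arithmetic explicit, here is a trivial back-of-envelope sketch in Python. The numbers come from the two hypothetical movements above; the assumption that newcomers judge a movement by the visible fraction of crazy-sounding members is mine, for illustration only:

    # Back-of-envelope arithmetic for the two hypothetical movements above.
    # The modeling assumption (newcomers judge a movement by the visible
    # fraction of crazy-sounding members) is illustrative, not empirical.

    def crazy_fraction(reasonable, crazy):
        """Share of visible members a newcomer would read as crazy."""
        return crazy / (reasonable + crazy)

    movement_1 = crazy_fraction(1000, 1)    # ~0.1%: easy to dismiss as an outlier
    movement_2 = crazy_fraction(1000, 500)  # ~33.3%: one comment in three

    print(f"Movement 1 looks {movement_1:.1%} crazy")
    print(f"Movement 2 looks {movement_2:.1%} crazy")

The difference is more than 300-fold, which is the sort of gap first impressions are made of.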
As for titles, you are probably right that most people will forget them immediately, and any impressions they form would be negligible.
The search terms are mostly scientific names for various extinction events throughout history, such as the one that killed the dinosaurs. I basically just skimmed through Wikipedia for obscure technical terms related to extinction.
Ah, well, paleontologists aren’t exactly our target group.
If you target people likely to understand X-risk, they should contribute no more crazy-sounding people than X-risk currently has, should they?
Like IT/computer science people, other technical degrees? Sci-fi people perhaps? Any kind of technophile?
Good points. The first three search terms I suggested were more biology-related than paleontology-related, but the bulk were paleontology. Neither is a terribly relevant field, and I get the impression that interdisciplinary research is rare. I guess it’s a judgement call as to how much benefit there would be in steering discussion of past and present extinction events (super-volcanoes, asteroid impacts, ice ages, etc.) toward future ones (nuclear winter?).
I’m not quite sure which disciplines would be optimal to target. Are there any talks on engineered pandemics that we might aim at epidemiologists? Perhaps making general AI researchers more aware of the risks would be beneficial; Nick Bostrom has a lovely TED talk and several talks at technical conferences on the topic. However, I haven’t read enough in those areas to know which keywords are used only by the experts.