It’s great to have responses more thought out than one’s original idea!
Regarding the people who would misunderstand existential risk: are you thinking it’s better to leave them in the dark as long as possible so as not to disturb the early existential risk movement, or that they will be likelier to accept existential risk once there is more academic study? Or both?
The downside of course is that without publicity you will have fewer resources and brains on the problem.
I agree it is best not to mention far-future stuff. People are already familiar with nuclear war, epidemics, and AI trouble (with Gates, Hawking, and Musk stating their concern), so existential risk itself isn’t really that unfamiliar.
As for people just seeing the title and moving on: you can use a suitably vague title, but even if not, what conclusions can they possibly draw from a title alone? I don’t think people remember one they merely skimmed over.
I have no idea what those search terms mean, but it sounds like a good idea. Perhaps you should run such a campaign?
I’m arguing “both”, but mainly that we don’t need the people who would misunderstand or misrepresent X-risk. People react against things they disagree with much more strongly than they react in favor of things they agree with. Consider two social movements:
1) a movement with 1000 reasonable-sounding people and 1 crazy-sounding person;
2) a movement with 1000 reasonable-sounding people and 500 crazy-sounding people.
I’m arguing that movement 2 will grow more slowly than movement 1, and will never become anywhere near as large. This is because new members will be strongly turned off by seeing a movement that looks 1/3 crazy, even if they are slightly attracted to the non-crazy parts. If I wrote a script that inserted random YouTube-quality comments into LessWrong, you would get the strong impression that the community had slid into the gutter, and many people would probably leave, despite the site having precisely as many interesting and thoughtful comments as before. The crazier a movement looks on the surface, the harder it is for academics to be taken seriously by their colleagues, and the fewer academics will be willing to risk their reputations by advocating for or publishing on the topic.
As for titles, you are probably right that most people will forget them immediately, and any impressions they form would be negligible.
The search terms are mostly biological names for various extinction events throughout history, such as the one that killed the dinosaurs. I basically just skimmed through Wikipedia for obscure technical terms related to extinction.
Ah, well paleontologists aren’t exactly our target group.
If you target people likely to understand X-risk, they shouldn’t bring in any more crazy-sounding people than X-risk currently has, should they?
Like IT/computer science people and other technical degrees? Sci-fi fans, perhaps? Any kind of technophile?
Good points. The first three search terms I suggested were more biology-related, but the bulk were paleontology. Neither is a terribly relevant field, and I get the impression that interdisciplinary research is rare. I guess it’s a judgement call as to how large the benefit might be of turning discussion of past and current extinction events (super-volcanoes, asteroid impacts, ice ages, etc.) toward addressing future ones (nuclear winter?).
I’m not quite sure which disciplines would be optimal to target. Are there any talks on engineered pandemics that we might aim at epidemiologists? Making general AI researchers more aware of the risks might also be beneficial, and Nick Bostrom has a lovely TED talk and several talks at technical conferences on the topic. However, I haven’t read enough in those areas to know which keywords are used only by the experts.