The naive thing to do for existential risk reduction seems like: make a big list of all the existential risks, then identify interventions for reducing every risk, order interventions by cost-effectiveness, and work on the most cost-effective interventions. Has anyone done this? Any thoughts on whether it would be worth doing?
Bostrom’s book ‘Global Catastrophic Risks’ covers the first two items on your list. The other two are harder. One issue is lack of information about the organisations currently working in this space. If I remember correctly, Nick Beckstead at Rutgers is compiling a list. Another is the interrelationships between risks—the GCR Institute is doing work on this aspect.
Yet another issue is that a lot of existential risks are difficult to address with ‘interventions’ as we might understand the term in, say, extreme poverty reduction. While one can donate money to AMF and send out antimalarial bednets, it seems harder to think of the equivalent for preventing the accidental or deliberate release of synthetic diseases. Indeed, many of these problems can only be tackled by government action, either because they require regulation or because of the cost of the prevention mechanism (e.g. an asteroid deflector). However, it’s no secret that the cost-effectiveness of political advocacy is really hard to measure, which is perhaps why it’s been underanalysed in the Effective Altruism community.
Thanks for reminding me about GCR.
OK, how about really easy stuff, like systematically emailing researchers involved with synthetic biology and trying to convince them to reconsider their choice of field? Ideally, we would get someone who knows biology (and has some level of respect in the biology community) to do this.
Does anyone reading LW know stuff about political advocacy and lobbying? Is there a Web 2.0 “lobbyist as a service” company yet? ;)
Are there ways we can craft memes to co-opt existing political factions? I doubt we’d be able to infect, say, most of the US Democratic party with the entire effective altruism memeplex, but perhaps a single meme could make a splash with good timing and a clever, sticky message.
Is there any risk of “poisoning the well” with an amateurish lobbying effort? If we can get Nick Bostrom or similar to present to legislators on a topic, they’ll probably be taken seriously, but a half-hearted attempt from no-names might not be.
E.g., annoyance towards overenthusiastic amateurs wasting the time of researchers who know the field and its issues better than they do seems plausible. Also, efforts to persuade researchers to leave the field seem most likely to work on the most responsible ones, leaving the more reckless researchers to dominate it, which could weaken the field’s overall norms around precaution-taking.
Low-quality or otherwise low-investment attempts at convincing people to make major life changes seem to me to run a strong risk of setting up later attempts for the “one argument against an army” failure mode. Remember that the people you’re trying to convince aren’t perfect rationalists.
(And I’m not sure that convincing a few researchers would be an improvement, let alone a large one.)
Only if they buy the argument in the first place. Have any “synthetic biology” researchers ever been convinced by such arguments?
Were there any relatively uninformed amateurs that played a role in convincing EY that AI friendliness was an issue?
Systematically emailing researchers runs the risk of being pattern-matched to crank spam. If I were a respected biologist, a better plan might be to:
1. Write a short (500-1500 word) editorial that communicates the strongest arguments with the least inferential distance, and sign it.
2. Get other recognized scientists to sign it.
3. Contact the editors of Science, Nature, and PNAS and ask whether they’d like to publish it.
4. If step 3 works, try to get an interview or segment on those journals’ podcasts (all three have podcasts), and try putting out a press release.
5. If step 3 fails, try getting a more specialized journal like Cell or Nature Genetics to publish it.
Some of these steps could of course be expanded or reordered (for example, it might be quicker to get a less famous journal to publish an editorial, and then use that as a stepping stone into Science/Nature/PNAS). I’m also ignoring the possibility that synthetic biologists have already considered the risks of their work and would react badly to being nagged (however professionally) about it.
Edit: Martin Rees got an editorial into Science about catastrophic risk just a few weeks ago, which is minor evidence that this kind of approach can work.
That might convince a few on the margin, but I doubt it would convince the bulk of them—especially the most dangerous ones, I guess.
People like Bostrom and Martin Rees are certainly engaged in raising public awareness through the media. There’s extensive lobbying on some risks, like global warming, nuclear weapons and asteroid defence. In relation to bio/nano/AI, the most important thing to do at the moment is research—lobbying should wait until it’s clearer what should be done. Although perhaps not—look at the mess over flu research.
One of the last serious attempts to prevent large-scale memetic engineering was the Unabomber.
The effort apparently failed—the memes have continued their march unabated.
It’s worth doing but very hard. GCR is a first stab at this, but really it’s going to take 20 times that amount of effort to make a first pass at the project you describe, and there just aren’t that many researchers seriously trying to do this kind of thing. Even if CSER takes off and MIRI and FHI both expand their research programs, I’d expect it to be at least another decade before that much work has been done.
It feels like more research on this issue would gradually improve the clarity of the existential risk picture. Do you think the current picture is sufficiently unclear that most potential interventions might backfire? Given limited resources, perhaps the best path is to do targeted investigation of what appear to be the most promising interventions, and stop as soon as one that seems highly unlikely to backfire is identified, or something like that.
What level of clarity is represented by a “first pass”?
It would be worth doing, and has been done, to some degree.
There are a few steps missing in between, such as identifying the causes of the risks, rating them by likelihood and by the odds of their wiping out the human species, and so on.
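To make the arithmetic behind that kind of ranking concrete, here is a minimal Python sketch of the naive prioritisation described above: score each candidate intervention by the expected reduction in extinction probability it buys per dollar, then sort. All intervention names, probabilities, and costs are hypothetical placeholders for illustration only, not estimates taken from this discussion.

    # Naive existential-risk prioritisation sketch.
    # All figures are hypothetical placeholders, not real estimates.
    interventions = [
        # (name, baseline extinction probability, residual probability after intervention, cost in USD)
        ("Asteroid deflection programme",    1e-4, 5e-5,   5e9),
        ("Synbio safety outreach/editorial", 1e-2, 9e-3,   1e5),
        ("AI safety research programme",     5e-2, 4.5e-2, 1e8),
    ]

    def risk_reduction_per_dollar(baseline, residual, cost):
        """Expected reduction in extinction probability bought per dollar spent."""
        return (baseline - residual) / cost

    # Rank interventions from most to least cost-effective.
    ranked = sorted(interventions,
                    key=lambda i: risk_reduction_per_dollar(i[1], i[2], i[3]),
                    reverse=True)

    for name, baseline, residual, cost in ranked:
        print(f"{name}: {risk_reduction_per_dollar(baseline, residual, cost):.2e} per dollar")

Of course, the hard part is not the sorting but producing defensible inputs: the likelihoods, the effect sizes, and the interactions between risks that the comments above point out.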