The “combination of views” includes both a high probability of doom and a quite high probability of MIRI making the counterfactual difference given survival. The points I listed address both.
If you are so strongly convinced that, even though AGI is a non-negligible x-risk, MIRI will probably turn out to have been without value even if a good AGI outcome is eventually achieved, why are you a research fellow there?
I think MIRI’s expected impact is positive and worthwhile. I’m glad that it exists, and that it and Eliezer specifically have made the contributions they have, relative to a world in which they never existed. Even a small share of the value of the AI safety cause can be quite great. That is quite consistent with thinking that “medium probability” is a big overestimate of the chance of MIRI making the counterfactual difference, or with thinking that civilization is almost certainly doomed by AI risk otherwise.
Lots of interventions are worthwhile even if a given organization working on them is unlikely to make the counterfactual difference. Most research labs working on malaria vaccines won’t invent one; most political activists won’t achieve big increases in foreign aid or immigration levels, or swing an election; most counterproliferation expenditures won’t avert nuclear war; and asteroid tracking was known ex ante to be far more likely to show we were safe than to find an asteroid on its way that could be stopped by a space mission.
The threshold for an x-risk charity of moderate scale to be worth funding is not a 10% chance of literally counterfactually saving the world from existential catastrophe. Annual world GDP is about $80 trillion, and wealth including human capital and the like will be in the quadrillions of dollars. A 10% chance of averting x-risk would be worth trillions of present dollars.
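To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python; the $2 quadrillion figure for total wealth is an assumption for illustration, since the text only says “quadrillions”:

```python
# Back-of-the-envelope value of a 10% chance of averting existential
# catastrophe, using the figures from the paragraph above.
world_gdp = 80e12      # annual world GDP: ~$80 trillion
total_wealth = 2e15    # "quadrillions" incl. human capital (assumed: $2 quadrillion)
p_avert = 0.10         # a 10% chance of averting x-risk

expected_value = p_avert * total_wealth
print(f"Expected value: ${expected_value:,.0f}")  # $200,000,000,000,000
# Even heavily discounted, this comfortably exceeds "trillions of present dollars".
```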
We’ve spent tens of billions of dollars on nuclear and bio risks, and even $100 million+ on asteroids (averting dinosaur-killer risk on the order of 1 in 100 million per annum). At that exchange rate, again, a 10% x-risk impact would be worth trillions of dollars, and governments and philanthropists have shown that they are ready to spend on x-risk or GCR opportunities far, far less likely to make a counterfactual difference than 10%.
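The implied exchange rate can be computed the same way; a sketch taking the asteroid figures above at face value:

```python
# Implied dollars-per-unit-of-risk from asteroid spending, per the text:
# $100 million+ spent against a ~1-in-100,000,000 annual risk.
asteroid_spend = 100e6   # $100,000,000+
annual_risk = 1e-8       # dinosaur-killer risk, ~1 in 100 million per annum

dollars_per_unit_risk = asteroid_spend / annual_risk  # $1e16 per full unit of probability
value_of_10pct_impact = 0.10 * dollars_per_unit_risk
print(f"Implied value of a 10% x-risk impact: ${value_of_10pct_impact:,.0f}")
# ~$1,000,000,000,000,000 -- "trillions of dollars" is, if anything, conservative.
```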
I see. We just used different thresholds for “valuable”: you used “high probability of MIRI making the counterfactual difference given survival”, while for me just, e.g., speeding Norvig/Gates/whoever a couple of years along the path until they devote efforts to FAI would be valuable, even if it were unlikely to Make The Difference (tm).
Whoever turns out to solve the problem, it’s unlikely that their AI safety evaluation process (“Should I do this thing?”) will operate in a strict vacuum; whoever one day evaluates the topic and makes up their mind to Save The World is highly likely to have encountered MIRI’s foundational work. Given that at least some of the steps in solving the problem are likely to be serial (sequential) in nature, the expected scenario is that MIRI’s legacy would at least provide some speed-up; a contribution which, again, I’d call valuable, even if it were unlikely to make or break the future.
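A toy model of that speed-up argument, with a purely illustrative hazard rate (the 1%-per-year figure is assumed, not taken from the discussion):

```python
# Toy model: if catastrophe strikes at some annual rate while safety work
# is unfinished, pulling that work forward by two years shrinks the
# window of exposure even when it never "makes the counterfactual difference".
p_doom_per_year = 0.01   # assumed annual probability of catastrophe while unprepared
speedup_years = 2        # "a couple years" of speed-up, as above

risk_reduction = 1 - (1 - p_doom_per_year) ** speedup_years
print(f"Risk reduction from a {speedup_years}-year speed-up: {risk_reduction:.2%}")
# ~1.99% -- nowhere near Making The Difference, yet clearly worth having.
```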
If the Gates Foundation had someone evaluate the evidence for AI-related x-risk right now, you probably wouldn’t expect MIRI research, AI researcher polls, philosophical essays etc. to be wholly disregarded.
I used that threshold because the numbers being thrown around in the thread were along those lines, and they are needed for the “medium probability” referred to in the OP. So the counterfactual impact on x-risk of MIRI never having existed is the main measure under discussion here. I erred in quoting your sentence in a way that might have made that hard to interpret.
If the Gates Foundation had someone evaluate the evidence for AI-related x-risk right now, you probably wouldn’t expect MIRI research, AI researcher polls, philosophical essays etc. to be wholly disregarded.
That’s right, and it’s one reason I think MIRI’s existence has reduced expected x-risk, although by less than 10 percentage points.