What do you think is MIRI’s probability of having been valuable, conditioned on a nice intergalactic future being true?
More than 10%, definitely. Maybe 50%?
A non-exhaustive list of reasons why I strongly disagree with this combination of views
Not that it should be used to dismiss any of your arguments, but reading your other comments in this thread I thought you must be playing devil’s advocate. Your phrasing here seems to preclude that possibility.
If you are so strongly convinced that, while AGI is a non-negligible x-risk, MIRI will probably turn out to have been without value even if a good AGI outcome were eventually achieved, why are you a research fellow there?
I’m puzzled. Let’s consider an edge case: even if MIRI’s actual research turned out to contribute nothing to an eventual solution, there is no reasonable doubt that it has significantly raised awareness of the issue (in relative terms).
Would the current situation with the CSER or FHI be unchanged or better if MIRI had never existed? Do you think those have a good chance of being valuable in bringing about a good outcome? Answering ‘no’ to the former and ‘yes’ to the latter would transitively imply that MIRI is valuable as well.
I.e. that alone—never mind actual research contributions—would make it valuable in hindsight, given an eventual positive outcome. Yet you’re strongly opposed to that view?
The “combination of views” includes both high probability of doom, and quite high probability of MIRI making the counterfactual difference given survival. The points I listed address both.
If you are so strongly convinced that, while AGI is a non-negligible x-risk, MIRI will probably turn out to have been without value even if a good AGI outcome were eventually achieved, why are you a research fellow there?
I think MIRI’s expected impact is positive and worthwhile. I’m glad that it exists, and that it and Eliezer specifically have made the contributions they have relative to a world in which they never existed. A small share of the value of the AI safety cause can be quite great. That is quite consistent with thinking that “medium probability” is a big overestimate for MIRI making the counterfactual difference, or that civilization is almost certainly doomed from AI risk otherwise.
Lots of interventions are worthwhile even if a given organization working on them is unlikely to make the counterfactual difference. Most research labs working on malaria vaccines won’t invent one; most political activists won’t achieve big increases in foreign aid or immigration levels, or swing an election; most counterproliferation expenditures won’t avert nuclear war; and asteroid tracking was known ex ante to be far more likely to confirm we were safe than to find an asteroid on its way that could be stopped by a space mission.
The threshold for an x-risk charity of moderate scale to be worth funding is not a 10% chance of literally counterfactually saving the world from existential catastrophe. Annual world GDP is about $80 trillion, and wealth including human capital and the like will be in the quadrillions of dollars. A 10% chance of averting x-risk would be worth trillions of present dollars.
We’ve spent tens of billions of dollars on nuclear and bio risks, and over $100 million on asteroids (averting dinosaur-killer risk on the order of 1 in 100,000,000 per annum). At that exchange rate, again, a 10% x-risk impact would be worth trillions of dollars, and governments and philanthropists have shown that they are ready to spend on x-risk or GCR opportunities far, far less likely than 10% to make a counterfactual difference.
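The arithmetic here can be sketched as a quick back-of-envelope calculation. The GDP and asteroid figures are the ones quoted above; the stakes multiplier (valuing survival at a few years of world GDP) is my own illustrative assumption:

```python
# Back-of-envelope expected-value check using the figures quoted above.
# The 5x-GDP stakes multiplier is an illustrative assumption, not an estimate.

world_gdp = 80e12              # ~$80 trillion annual world GDP
p_avert = 0.10                 # hypothetical 10% chance of averting x-risk

# Even valuing survival at only a few years of world GDP, a 10% chance of
# averting catastrophe is already worth trillions of present dollars.
value_of_survival = 5 * world_gdp
expected_value = p_avert * value_of_survival
print(f"10% aversion chance is worth ~${expected_value / 1e12:.0f} trillion")

# Asteroid comparison: ~$100M spent against a ~1-in-100,000,000 annual risk.
asteroid_spend = 100e6
asteroid_annual_risk = 1e-8
implied_price_per_unit_risk = asteroid_spend / asteroid_annual_risk
print(f"Implied spend per unit of risk averted: ~${implied_price_per_unit_risk / 1e12:.0f} trillion")
```

Whatever multiplier one picks for the stakes, the conclusion is robust: the implied willingness-to-pay per unit of risk averted is orders of magnitude above what a 10%-counterfactual-impact threshold would require.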
I see. We just used different thresholds for “valuable”: you used “high probability of MIRI making the counterfactual difference given survival”, while for me just, e.g., speeding Norvig/Gates/whoever a couple of years along the path until they devote efforts to FAI would be valuable, even if it were unlikely to Make The Difference (tm).
Whoever turns out to solve the problem, their AI safety evaluation process (“Should I do this thing?”) is unlikely to operate in a strict vacuum: whoever one day evaluates the topic and makes up their mind to Save The World will very likely have encountered MIRI’s foundational work. And given that at least some of the steps in solving the problem are likely to be serial (sequential) in nature, the expected scenario is that MIRI’s legacy provides at least some speed-up; a contribution which, again, I’d call valuable, even if it were unlikely to make or break the future.
If the Gates Foundation had someone evaluate the evidence for AI-related x-risk right now, you probably wouldn’t expect MIRI research, AI researcher polls, philosophical essays etc. to be wholly disregarded.
I used that threshold because the numbers being thrown around in the thread were along those lines, and are needed for the “medium probability” referred to in the OP. So the counterfactual impact on x-risk of MIRI never having existed is the main measure under discussion here. I erred in quoting your sentence in a way that might have made that hard to interpret.
If the Gates Foundation had someone evaluate the evidence for AI-related x-risk right now, you probably wouldn’t expect MIRI research, AI researcher polls, philosophical essays etc. to be wholly disregarded.
That’s right, and one reason that I think that MIRI’s existence has reduced expected x-risk, although by less than a 10% probability.