I see Nick’s post as pointing out a nontrivial minimum threshold that x-risk reduction opportunities need to meet in order to be more promising than broad interventions, even within the astronomical waste framework. I agree that you have to look at the particulars of the x-risk reduction opportunities, and of the broad intervention opportunities, that are on the table, in order to argue for focus on broad interventions. But that’s a longer discussion.
I agree but remark that so long as at least one x-risk reduction effort meets this minimum threshold, we can discard all non-x-risk considerations and compare only x-risk impacts to x-risk impacts, which is how I usually think in practice. The question “Can we reduce all impacts to probability of okayness?” seems separate from “Are there mundane-seeming projects which can achieve comparably sized x-risk impacts per dollar as side effects?”, and neither tells us to consider non-x-risk impacts of projects. This is the main thrust of the astronomical waste argument and it seems to me that this still goes through.
It’s important to note that:
1. There may be highly targeted interventions (other than x-risk reduction efforts) which can have big trajectory changes (including indirectly improving humans’ ability to address x-risks).
2. With consideration #1 in mind, in deciding whether to support x-risk interventions, one has to consider room for more funding and diminishing marginal returns on investment.
(I recognize that the claims in this comment aren’t present in the comment that you responded to, and that I’m introducing them anew here.)
Mm, I’m not sure what the intended import of your statement is; can we be more concrete? This sounds like something I would say in explaining why I directed some of my life effort toward CFAR—along with, “Because I found that really, actually, in practice the number of rationalists seemed like a sharp limiting factor on the growth of x-risk efforts; if I’d picked something lofty-sounding in theory that was supposed to have a side impact, I probably wouldn’t have guessed as well” and “Keeping in mind that the top people at CFAR are explicitly x-risk aware and think of that impact as part of their job”.
Something along the lines of CFAR could fit the bill. I suspect CFAR could have a bigger impact if it targeted people with a stronger focus on global welfare, and/or people with greater influence, than the typical CFAR participant. But I recognize that CFAR is still in a nascent stage, so it’s necessary to co-optimize for the development of content and for growth.
I believe that there are other interventions that would also fit the bill, which I’ll describe in later posts.
CFAR is indeed co-optimizing in that way and trying to maximize net impact over time; if you think that a different mix would produce a greater net impact, make the case! CFAR isn’t a side-effect project where you just have to cross your fingers and hope that sort of thing happens by coincidence while the leaders are thinking about something else; it’s explicitly aimed that way.
There may be highly targeted interventions (other than x-risk reduction efforts) which can have big trajectory changes (including indirectly improving humans’ ability to address x-risks).
This is, more or less, the intended purpose behind spending all this energy on studying rationality rather than directly researching FAI. I’m not saying I agree with that reasoning, by the way. But that was the initial reasoning behind Less Wrong, for better or worse. Would we be farther ahead if, rather than working on rationality, Eliezer had started working immediately on FAI? Maybe, but likely not. I could see it being argued both ways. But anyway, this shows an actual, very concrete example of this kind of intervention.
Another issue is that if you accept the claims in the post, when you are comparing the ripple effects of different interventions, you can’t just compare the ripple effects on x-risk. Ripple effects on other trajectory changes are non-negligible as well.
I agree with Jonah’s point and think my post supports it.