It’s important to note that:

1. There may be highly targeted interventions (other than x-risk reduction efforts) which can have big trajectory changes (including indirectly improving humans’ ability to address x-risks).
2. With consideration #1 in mind, in deciding whether to support x-risk interventions, one has to consider room for more funding and diminishing marginal returns on investment.
(I recognize that the claims in this comment aren’t present in the comment that you responded to, and that I’m introducing them anew here.)
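To make consideration #2 concrete, here is a toy sketch of how one might compare the value of a marginal dollar at different funding levels. The logarithmic returns model and all of the numbers are purely illustrative assumptions, not estimates about any real organization:

```python
import math

def total_value(funding_musd, scale_musd=5.0):
    """Toy model: impact grows logarithmically with funding, so each extra
    dollar matters less as an organization's budget grows. Both the
    functional form and the scale parameter are illustrative assumptions."""
    return math.log(1.0 + funding_musd / scale_musd)

def marginal_value(funding_musd, extra_musd=1.0, scale_musd=5.0):
    """Impact added by one additional million dollars at a given funding level."""
    return total_value(funding_musd + extra_musd, scale_musd) - total_value(funding_musd, scale_musd)

# The same marginal $1M buys much less once an organization is well funded,
# which is why room for more funding matters when deciding where to give.
for level in (1.0, 20.0):
    print(f"at ${level:.0f}M of funding, an extra $1M adds {marginal_value(level):.3f} units of impact")
```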
Mm, I’m not sure what the intended import of your statement is; can we be more concrete? This sounds like something I would say in explaining why I directed some of my life effort toward CFAR—along with, “Because I found that really actually in practice the number of rationalists seemed like a sharp limiting factor on the growth of x-risk efforts; if I’d picked something lofty-sounding in theory that was supposed to have a side impact, I probably wouldn’t have guessed as well” and “Keeping in mind that the top people at CFAR are explicitly x-risk aware and think of that impact as part of their job”.
Something along the lines of CFAR could fit the bill. I suspect CFAR could have a bigger impact if it targeted people with a stronger focus on global welfare, and/or people with greater influence, than the typical CFAR participant. But I recognize that CFAR is still at a nascent stage, so it’s necessary to co-optimize for the development of content and for growth.
I believe that there are other interventions that would also fit the bill, which I’ll describe in later posts.
CFAR is indeed co-optimizing in that way and trying to maximize net impact over time; if you think that a different mix would produce a greater net impact, make the case! CFAR isn’t a side-effect project where you just have to cross your fingers and hope that sort of thing happens by coincidence while the leaders are thinking about something else; it’s explicitly aimed that way.
There may be highly targeted interventions (other than x-risk reduction efforts) which can have big trajectory changes (including indirectly improving humans’ ability to address x-risks).
This is, more or less, the intended purpose behind spending all this energy on studying rationality rather than directly researching FAI. I’m not saying I agree with that reasoning, by the way. But that was the initial reasoning behind Less Wrong, for better or worse. Would we be farther ahead if, rather than working on rationality, Eliezer had started working immediately on FAI? Maybe, but likely not. I could see it being argued both ways. But anyway, this is an actual, very concrete example of this kind of intervention.