Hm, so maybe a highly distilled version of my model here is that EAs tend to come from a worldview of trying to do the most good, whereas rationalists tend to come from a worldview of Getting the Right Answer. I think the latter is more useful for preventing AI x-risk. (Though to be very clear, the former is also hugely laudable, and we need orders of magnitude more of both types of people active in the world; I'm just wondering if we're leaving value on the table by not having a rationalist funnel specifically.)
I think I get what you're saying now; let me try to rephrase. We want to grow the "think good and do good" community. We have a lot of, let's say, "recruitment material" that appeals to people's sense of do-gooding, so unaligned people who vaguely want to do good might trip over the material and get recruited. But we have less of that on the think-gooding side, so there's a larger pool of unaligned people who want to think good that we could recruit.
Does that seem right?
Where does the Atlas Fellowship fall on your scale of "recruits do-gooders" versus "recruits think-gooders"?