I work as a grantmaker on the Global Catastrophic Risks Capacity-Building team at Open Philanthropy; a large part of our funding portfolio is aimed at increasing the human capital and knowledge base directed at AI safety. I previously worked on several of Open Phil’s grants to Lightcone.
As part of my team’s work, we spend a good deal of effort forming views about which interventions have or have not been important historically for the goals described in my first paragraph. I think LessWrong and the Alignment Forum have been strongly positive for these goals historically, and think they’ll likely continue to be at least into the medium term.
Good Ventures’ decision to exit this broad space meant that Open Phil didn’t reach a decision on whether & how much to continue funding Lightcone; I’m not sure where we would have landed there. However, I do think that for many readers who resonate with Lightcone’s goals and approach to GCR x-risk work, it’s reasonable to think this is among their best donation opportunities. Below I’ll describe some of my evidence and thinking.
Surveys: The top-level post describes surveys we ran in 2020 and 2023. I think these provide good evidence that LessWrong (and the Alignment Forum) have had a lot of impact on the career trajectories & work of folks in AI safety.
The methodology behind the cost-effectiveness estimates in the top-level post broadly makes sense to me, though I’d emphasize the roughness of this kind of calculation.
In general I think one should watch absolute impact alongside the cost-effectiveness calculations, since cost-effectiveness estimates can be non-robust when N is small, i.e. when few people interacted with a given program. In this case N seems large enough that I don’t worry much about robustness (a toy illustration of what I mean is below, after the survey details).
This whole approach does not really take into account negative impacts. We did ask people about these, but: a) the respondents are selected for having been positively impacted because they’re taking our survey at all, and b) for various other reasons, I’m skeptical of this methodology capturing negative impacts well.
So I think there’s reasonable room for disagreement here, if e.g. you think something like, “yes important discussions happen here, but it would be better if they happened on some other platform for <reason>.” Discussion then becomes about the counterfactual other platform.
More methodological detail, for the curious:
These were invite-only surveys; we aimed to invite many of the people we thought were doing the most promising work on global catastrophic risk reduction (e.g. AI safety) across many areas, and who seemed likely to have experienced important influences or trajectory boosts recently.
In 2020, we got ~200 respondents; in 2023, we got ~350.
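As a toy illustration of the robustness point above (every number and distribution here is invented for illustration, not taken from the actual surveys): when only a handful of people interacted with a program, a per-dollar impact estimate is dominated by whether one or two unusually high-impact people happen to be in the sample, whereas at a few hundred respondents the estimate stabilizes.

```python
# Toy sketch (made-up numbers, not survey data): how noisy a
# "impact per dollar" estimate is at different sample sizes,
# assuming heavy-tailed per-person impacts.
import numpy as np

rng = np.random.default_rng(0)

def simulated_cost_effectiveness(n_respondents, n_sims=10_000, budget=1_000_000):
    """Simulate an 'impact points per $' estimate when a few respondents
    account for most of the impact (lognormal, i.e. heavy-tailed)."""
    impacts = rng.lognormal(mean=0.0, sigma=2.0, size=(n_sims, n_respondents))
    return impacts.sum(axis=1) / budget

for n in [10, 50, 350]:
    est = simulated_cost_effectiveness(n)
    # Relative spread shrinks roughly like 1/sqrt(N).
    print(f"N={n:>3}: relative spread of the estimate ≈ {est.std() / est.mean():.2f}")
```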
Other thoughts:
I think a “common-sense” view backs up this empirical evidence quite well: LW/AF is the main place on the public internet where in-depth discussions about e.g. AI safety research agendas happen, and I increasingly see links to posts here “in the wild,” e.g. in mainstream news articles.
After discussing absolute impact or even average impact per $, you still need to say something about marginal impact in order to talk about the cost-effectiveness of a donation.
I think it’s prima facie plausible that LessWrong has steeply diminishing marginal returns to effort or dollars, since it’s an online platform where most contributions come from users.
I’m fairly uncertain about how steep that diminishing-returns curve is; my best guess is that it’s steeper than for many other grantees, perhaps by something like 3x-10x (a very made-up number). A toy sketch of why steepness matters for marginal cost-effectiveness follows below.
Some factors going into my thinking here, non-exhaustive and pushing in various directions, thrown out without much explanation: a) Oli’s statements that the organization is low on slack and that staff are taking large pay cuts, b) my skepticism of some of the items in the “Things I Wish I Had Time And Funding For” section, c) some sense that thoughtful interface design can really improve online discussions, and a sense that LessWrong is very thoughtful in this area.
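To make the average-vs-marginal distinction concrete, here is a toy sketch (the functional form, budget, and steepness values are all invented for illustration): under a concave returns curve, average impact per dollar at the current budget can look great while marginal impact per dollar is much smaller, and how much smaller depends on how steeply returns diminish.

```python
# Toy sketch of average vs. marginal impact per dollar under
# diminishing returns (invented numbers and functional form).
def impact(budget, steepness):
    """Hypothetical concave returns curve: impact grows like
    budget**(1 - steepness), so higher steepness = faster-diminishing returns."""
    return budget ** (1.0 - steepness)

current_budget = 2_000_000  # made-up annual budget, in dollars
extra = 10_000              # a marginal donation

for steepness in [0.3, 0.6, 0.9]:
    avg = impact(current_budget, steepness) / current_budget
    marginal = (impact(current_budget + extra, steepness)
                - impact(current_budget, steepness)) / extra
    print(f"steepness={steepness}: marginal/average impact per $ ≈ {marginal / avg:.2f}")
```

The only point of the sketch is that you can’t read marginal value directly off average impact per dollar; you also need a view on the shape of the returns curve.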
I don’t have a strong view on the merits of Lightcone’s other current projects. One small note I’d make is that, when assessing the cost-effectiveness of something like Lighthaven, it’s of course important to consider the actual and expected revenues as well as the costs.
In contrast to some other threads here such as Daniel Kokotajlo’s and Drake Thomas’s, on a totally personal level I don’t feel a sense of “indebtedness” to Lightcone or LessWrong, have historically felt less aligned with it in terms of “vibes,” and don’t recall having significant interactions with it at the time it would have been most helpful for me gaining context on AI safety. I share this not as a dig at Lightcone, but to provide context to my thinking above 🤷.