Has EA invested much into banning gain-of-function research? I’ve heard about Alvea and 1DaySooner, but not about any EA projects aimed at banning it. Perhaps the relevant efforts aren’t publicly known, but I wouldn’t be shocked if more person-hours have been invested in EA community building over the past two years (for example) than in banning gain-of-function research.
If it hasn’t, shouldn’t that negatively update us on how EA policy investment for AI will go?
[In the sense that this seems like a slam-dunk policy to me from where I sit, and if the policy landscape is such that it and things like it are not worth trying, then probably policy can’t deliver the wins we need in the much harder AI space.]
An earlier comment seems to make a good case that there’s already more community investment in AI policy, and another earlier thread points out that the content in brackets doesn’t seem to involve a good model of policy tractability.
There was already a moratorium on funding GoF research, instituted in 2014 after an uproar in 2011, which was not renewed when it expired. A Senate bill in 2021 would have made the moratorium permanent (and, I think, more far-reaching: institutions that did any such research would become ineligible for federal funding, which is much closer to a ban on doing the research at all than to simply declining to fund those projects), but as far as I can tell it stalled out. I don’t think this policy ask was anywhere near as crazy as the AI policy asks that we would need to make the AGI transition survivable!
It sounds like you’re arguing “look, if your sense of easy and hard is miscalibrated, you can’t reason by saying ‘if they can’t do easy things, then they can’t do hard things’,” which seems like a reasonable criticism on logical grounds but not probabilistic ones. Surely not being able to do things that seem easy is evidence that one’s not able to do things that seem hard?
I agree it’s some evidence, but that’s a much weaker claim than “probably policy can’t deliver the wins we need.”
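To make that concrete, here is a toy Bayesian update. All of the numbers below are made-up illustrative assumptions, not estimates from this thread; the point is only the shape of the inference, i.e. how a failed “easy” ask can be genuine evidence against policy tractability while still leaving “probably policy can’t deliver” unsupported:

```python
# Toy Bayes update with made-up numbers, purely illustrative.

prior_hard_doable = 0.5  # prior credence that the hard AI policy asks are achievable

# Likelihood of observing "the easy ask (a GoF ban) stalled" under each hypothesis.
p_fail_given_doable = 0.4      # easy asks can stall for incidental reasons
p_fail_given_not_doable = 0.8  # stalling is likelier if policy is broadly intractable

# Bayes' rule: P(hard asks doable | easy ask failed)
numerator = p_fail_given_doable * prior_hard_doable
posterior = numerator / (numerator + p_fail_given_not_doable * (1 - prior_hard_doable))

print(f"posterior: {posterior:.2f}")  # -> posterior: 0.33
```

Under these (invented) likelihoods, the failure moves credence from 0.50 to 0.33: a real update against tractability, but on its own well short of “probably policy can’t deliver the wins we need.”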