I’ll have to think about this more, but if you compare this to other things like climate change or gain-of-function research, it’s a very strange movement indeed that doesn’t want to spread awareness. Note that I agree that regulation is probably more likely to be harmful than helpful, and also note that I personally despise political activists, so I’m not someone normally amenable to “consciousness raising.” It’s just unusual to be against it for something you care about.
Absolutely—but it’s a strange situation in many respects.
It may be that spreading awareness is positive, but I don’t think the standard arguments translate directly. There’s also irreversibility to consider: if you err on the side of not spreading information, you can still spread it later (so long as there’s time); you can’t easily unspread it.
More generally, I think for most movements we should ask ourselves, “How much worse than the status quo can things plausibly get?”
For gain-of-function research, we’d need to consider outcomes where the debate gets huge focus but the sensible side loses (e.g. through the public coming to see GoF research as the only way to prevent future pandemics). This seems unlikely, since I think there are good common-sense arguments against GoF at most levels of detail.
For climate change, it’s less clear to me: there seem to be many plausible ways for things to have gotten worse, essentially because the only clear conclusion is “something must be done”, but there’s quite a bit less clarity about what—or at least there should be less clarity. (e.g. to the extent that direct climate-positive actions have negative economic consequences, how large are the downstream negative-climate impacts? I have no idea, but I’m sure it’s a complex situation)
For AGI, I find it easy to imagine making-things-worse and hard to see plausible routes to making-things-better.
Even the expand-the-field upside needs to be approached with caution. This might be better thought of as something like [expand the field while maintaining/improving the average level of understanding]. Currently, most people who bump into AI safety/alignment will quickly find sources discussing the most important problems. If we expanded the field 100x overnight, it would become plausible that most new people wouldn’t focus on the real problems. (e.g. it’s easy enough to notice only the outer-alignment side of things)
Unless time is very short, I’d expect that doubling the field each year works out better than 5x-ing it each year—because all else would not be equal. (I have no good sense of what the best expansion rate or mechanism is—just that it’s not [expand as fast as possible])
But perhaps I’m conflating [aware of the problem] with [actively working on the problem] a bit too much. It might not be a bad idea to have large numbers of smart people aware of the problem overnight.