I can imagine plausible mechanisms by which the first four backlash examples were a consequence of perceived power-seeking from AI safetyists, but I don't see one for e/acc. Does anyone have one?
Alternatively, what reason do I have to expect a causal relationship between safetyist power-seeking and the e/acc backlash even if I can't see the mechanism myself?
e/acc has coalesced in defense of open-source, partly in response to AI safety attacks on open-source. This may well lead directly to a strongly anti-AI-regulation Trump White House, since there are significant links between e/acc and MAGA.
I think of this as a massive own goal for AI safety, caused by focusing too much on trying to get short-term “wins” (e.g. dunking on open-source people) that don’t actually matter in the long term.
e/acc has coalesced in defense of open-source, partly in response to AI safety attacks on open-source. This may well lead directly to a strongly anti-AI-regulation Trump White House
IMO this overstates the influence of OS stuff on the broader e/acc movement.
My understanding is that the central e/acc philosophy is about tech progress: something along the lines of "we want to accelerate technological progress and AGI progress as quickly as possible, because we think technology is extremely awesome and will lead to a bunch of awesome+cool outcomes." The support for OS is in service of that ultimate goal of accelerating technological progress.
In a world where AI safety folks didn’t say/do anything about OS, I would still suspect clashes between e/accs and AI safety folks. AI safety folks generally do not believe that maximally fast/rapid technological progress is good for the world. This would inevitably cause tension between the e/acc worldview and the worldview of many AI safety folks, unless AI safety folks decided never to propose any regulations that could cause us to deviate from the maximally-fast pathways to AGI. This seems quite costly.
(Separately, I agree that “dunking on open-source people” is bad and that people should do less “dunking on X” in general. I don’t really see this as an issue with prioritizing short-term wins so much as getting sucked into ingroup vs. outgroup culture wars and losing sight of one’s actual goals.)
This may well lead directly to a strongly anti-AI-regulation Trump White House
Similar point here– I think it’s extremely likely this would’ve happened anyways. A community that believes passionately in rapid or maximally-fast AGI progress already has strong motivation to fight AI regulations.
In a world where AI safety folks didn’t say/do anything about OS, I would still suspect clashes between e/accs and AI safety folks.
There’s a big difference between e/acc as a group of random twitter anons, and e/acc as an organized political force. I claim that anti-open-source sentiment from the AI safety community played a significant role (and was perhaps the single biggest driver) in the former turning into the latter. It’s much easier to form a movement when you have an enemy. As one illustrative example, I’ve seen e/acc flags that are a version of the libertarian flag saying “come and take it [our GPUs]”. These are a central example of an e/acc rallying cry that was directly triggered by AI governance proposals. And I’ve talked to several principled libertarians who are too mature to get sucked into a movement by online meme culture, but who have been swung in that direction due to shared opposition to SB-1047.
Consider, analogously: Silicon Valley has had many political disagreements with the Democrats over the last decade—e.g. left-leaning media has continuously been very hostile to Silicon Valley. But while the incentives to push back were there for a long time, the organized political will to push back has only arisen pretty recently. This shows that there’s a big difference between “in principle people disagree” and “actual political fights”.
I think it’s extremely likely this would’ve happened anyways. A community that believes passionately in rapid or maximally-fast AGI progress already has strong motivation to fight AI regulations.
This reasoning seems far too weak to support such a confident conclusion. There was a lot of latent pro-innovation energy in Silicon Valley, true, but the ideology it gets channeled towards is highly contingent. For instance, Vivek Ramaswamy is a very pro-innovation, anti-regulation candidate who has no strong views on AI. If AI safety hadn’t been such a convenient enemy then plausibly people with pro-innovation views would have channeled them towards something closer to his worldview.
Separately, do you think “organized opposition” could have ever been avoided? It sounds like you’re making two claims:
When AI safety folks advocate for specific policies, this gives opponents something to rally around and makes them more likely to organize.
There are some examples of specific policies (e.g., restrictions on OS, SB1047) that have contributed to this.
Suppose no one said anything about OS, and also (separately) SB1047 never happened. Presumably, at some point, some groups start advocating for specific policies that go against the e/acc worldview. At that point, it seems like you get the organized resistance.
So I'm curious: What does the Ideal Richard World look like? Does it mean people are just much more selective about which policies to advocate for? Under what circumstances is it OK to advocate for something that will increase the political organization of opposing groups? Are there examples of policies that you think are so important that they're worth the cost (of giving your opposition something to rally around)? To what extent is the deeper crux the fact that you're less optimistic about the policy proposals actually helping?
Presumably, at some point, some groups start advocating for specific policies that go against the e/acc worldview. At that point, it seems like you get the organized resistance.
My two suggestions:
People stop aiming to produce proposals that hit almost all the possible worlds. By default you should design your proposal to be useless in, say, 20% of the worlds you’re worried about (because trying to get that last 20% will create really disproportionate pushback); or design your proposal so that it leaves 20% of the work undone (because trusting that other people will do that work ends up being less power-seeking, and more robust, than trying to centralize everything under your plan). I often hear people saying stuff like “we need to ensure that things go well” or “this plan needs to be sufficient to prevent risk”, and I think that mindset is basically guaranteed to push you too far towards the power-seeking end of the spectrum. (I’ve added an edit to the end of the post explaining this.)
As a specific example of this, if your median doom scenario goes through AGI developed/deployed by centralized powers (e.g. big labs, govts) I claim you should basically ignore open-source. Sure, there are some tail worlds where a random hacker collective beats the big players to build AGI; or where the big players stop in a responsible way, but the open-source community doesn’t; etc. But designing proposals around those is like trying to put out candles when your house is on fire. And I expect there to be widespread appetite for regulating AI labs from govts, wider society, and even labs themselves, within a few years’ time, unless those proposals become toxic in the meantime—and making those proposals a referendum on open-source is one of the best ways I can imagine to make them toxic.
(I’ve talked to some people whose median doom scenario looks more like Hendrycks’ “natural selection” paper. I think it makes sense by those people’s lights to continue strongly opposing open-source, but I also think those people are wrong.)
I think that the “we must ensure” stuff is mostly driven by a kind of internal alarm bell rather than careful cost-benefit reasoning; and in general I often expect this type of motivation to backfire in all sorts of ways.
Why do you assume that open source equates to small hacker groups??? The largest supplier of open weights is Meta AI, and their recent Llama-405B rivals SOTA models.
I think your concrete suggestions such as these are very good. I still don’t think you have illustrated the power-seeking aspect you are claiming very well (it seems to be there for EA, but less so for AI safety in general).
In short, I think you are conveying certain important, substantive points, but are choosing a poor framing.
Thanks for this clarification– I understand your claim better now.
Do you have any further evidence suggesting that AI safety caused (or contributed meaningfully to) this shift from "online meme culture" to "organized political force"? This seems like the biggest crux imo.
No legible evidence jumps to mind, but I’ll keep an eye out. Inherently this sort of thing is pretty hard to pin down, but I do think I’m one of the handful of people that most strongly bridges the AI safety and accelerationist communities on a social level, and so I get a lot of illegible impressions.
Do you see this as likely to have been avoidable? How? I agree that it’s undesirable. Less clear to me that it’s an “own goal”.
Do you see other specific things we’re doing now (or that we may soon do) that seem likely to be future-own-goals?
[all of the below is “this is how it appears to my non-expert eyes”; I’ve never studied such dynamics, so perhaps I’m missing important factors]
I expect that, even early on, e/acc actively looked for sources of long-term disagreement with AI safety advocates, so it doesn’t seem likely to me that [AI safety people don’t emphasize this so much] would have much of an impact. I expect that anything less than a position of [open-source will be fine forever] would have had much the same impact—though perhaps a little slower. (granted, there’s potential for hindsight bias here, so I shouldn’t say “I’m confident that this was inevitable”, but it’s not at all clear to me that it wasn’t highly likely)
It’s also not clear to me that any narrow definition of [AI safety community] was in a position to prevent some claims that open-source will be unacceptably dangerous at some point. E.g. IIRC Geoffrey Hinton rhetorically compared it to giving everyone nukes quite a while ago.
Reducing focus on [desirable, but controversial, short-term wins] seems important to consider where non-adversarial groups are concerned. It's less clear that it helps against (proto-)adversarial groups—unless you're proposing some kind of widespread, strict message discipline (I assume that you're not).
[EDIT: for useful replies to this, see Richard's replies to Akash above]
I agree that dunking on OS communities has apparently not been helpful in these regards. It seems kind of orthogonal to being power-seeking though.
Overall, I think part of the issue with AI safety is that the established actors (e.g. large parts of CS academia) have opted out of taking a responsible stance, especially compared to recent developments in the biosciences and RNA editing. Partially, one could blame this on them not wanting to identify too closely with, or grant legitimacy to, the existing AI safety community at the time. However, a priori, it seems more likely that it is simply due to the different culture in CS vs the life sciences, with the former lacking a deep culture of responsibility for its research (in particular insofar as it's connected to, e.g., Silicon Valley startup culture).