Do you see this as likely to have been avoidable? How?
I agree that it’s undesirable. Less clear to me that it’s an “own goal”.
Do you see other specific things we’re doing now (or that we may soon do) that seem likely to be future-own-goals?
[all of the below is “this is how it appears to my non-expert eyes”; I’ve never studied such dynamics, so perhaps I’m missing important factors]
I expect that, even early on, e/acc actively looked for sources of long-term disagreement with AI safety advocates, so it doesn't seem likely to me that [AI safety people not emphasizing this so much] would have had much of an impact.
I expect that anything less than a position of [open-source will be fine forever] would have had much the same impact—though perhaps a little slower. (Granted, there's potential for hindsight bias here, so I shouldn't say "I'm confident that this was inevitable", but it's not at all clear to me that it wasn't highly likely.)
It's also not clear to me that any narrow definition of [AI safety community] was in a position to prevent claims that open-source will be unacceptably dangerous at some point. E.g., IIRC, Geoffrey Hinton rhetorically compared it to giving everyone nukes quite a while ago.
Reducing focus on [desirable, but controversial, short-term wins] seems important to consider where non-adversarial groups are concerned. It’s less clear that it helps against (proto-)adversarial groups—unless you’re proposing some kind of widespread, strict message discipline (I assume that you’re not).
[EDIT: for useful replies to this, see Richard's replies to Akash above]