So I think I disagree that this is an appeal to authority. My constant reliance on Sears (2023) is not because Sears is an authority, but because I think it's a good piece of work. I've tried summarising it here; the reason I don't lay out the entire argument is that it rests on three detailed case studies and a number of less detailed ones, and laying them all out would have made the post far too long. I hope people read the underlying literature that I base my argument on. I buy its arguments because they are compelling, not because of authority.
I think looking for analogies through history, and at what strategies have led to success and failure, is a very useful, albeit limited, approach. Aschenbrenner seems to think so as well. I don't fully argue for why I think we could get humanity securitisation, but my various arguments can be summarised in the following two points:
1. National securitisation, whilst often the 'default', is by no means inevitable. Humanity securitisation can win out, and it is also possible that AGI is never truly securitised (both of these are probably safer paths).
2. When national securitisation does win out, it is because it 'wins' a struggle of narratives. Expert epistemic communities, like the AGI Safety community, can play a role in this struggle, as can other forms of political work.
The 'definitionally' point is partly poor writing on my part. The definition of macrosecuritisation failure includes within it 'failure to combat the existential threat'. So if I can show that national securitisation leads to macrosecuritisation failure, that means our ability to deal with the threat is reduced; conversely, if our ability to deal with the threat is not reduced, then it would not count as macrosecuritisation failure. So the point to contest is whether national securitisation winning out causes macrosecuritisation failure, since that definition incorporates the dangerous outcomes. I agree, though, that I worded this poorly. I also think this definition is quite slippery: I can think of scenarios where macrosecuritisation fails but you still get effective action. But this is somewhat beside the point.
I am also somewhat pessimistic about a Trump II administration when it comes to macrosecuritisation and pausing. But this doesn't mean I think Aschenbrenner's viewpoint is the 'best of the rest'; it's amongst the worst. As I argued in the piece, national securitisation carries real and significant dangers that have played out many times, and it would undermine the non-securitised governance efforts made so far, so it's not clear to me why we ought to support it even if macrosecuritisation won't work. Aschenbrenner's model of AI governance seems more dangerous than these other strategies, and there are things Republicans care about beyond national security, so it's not obvious to me why this is where they should go. The track record of national security has been (as shown) very poor, so I don't know why pessimism around macrosecuritisation should make you endorse it.
The definition of macrosecuritisation failure includes within it ‘failure to combat the existential threat’.
This seems like a poorly chosen definition that’s simply going to confuse any discussion of the issue.
The track record of national security has been (as shown) very poor, so I don’t know why pessimism around macrosecuritisation should make you endorse it.
If neither macrosecuritisation nor a pause is likely to occur, what's the alternative if not Aschenbrenner?
(To clarify, I’m suggesting outreach to the national security folks, not necessarily an AI Manhattan project, but I’m expecting the former to more or less inevitably lead to the latter).