Your reasoning makes sense with regard to how a more authoritarian government would make it more likely that we avoid x-risk. But how do you weigh that against the possibility that an intent-aligned AGI (one willing to accept harmful commands) would be more likely to create s-risks in the hands of an authoritarian state, as the post's author has alluded to?
Also, what do you make of the author’s comment below?
> In general, the public seems pretty bought-in on AI risk being a real issue and is interested in regulation. Having democratic instincts would perhaps push in the direction of good regulation (though the relationship here seems a little less clear).
This is a comprehensive, nuanced, and well-written post. A few questions:
How likely do you think it is that, under a Harris administration, AI labs will successfully lobby Democrats to kill safety-oriented policies, as happened with SB 1047 at the state level? Even if Harris is on net better than Trump, this could greatly reduce the expected value of her presidency from an x-risk perspective.
Related to the above, is it fair to say that under either party, there will need to be advocacy and lobbying for safety-focused AI policies? If so, how do you make tradeoffs between this and the election? For example, if someone has $x to donate, what percentage should they give to the election versus other AI safety causes?
How much of your assessment of the difference in AI risk between Harris and Trump is due to the concrete AI policies you expect each of them to push, vs. how much is due to differences in competence and respect for democracy?
I can’t find much information about the Movement Labs quiz and how it helps Harris win. Could you elaborate, privately if needed? If the quiz simply matches voters with the candidate who best fits their values, does its effectiveness for Harris come from being distributed to voters who lean Democrat, or through a different path?