Your reasoning makes sense with regard to how a more authoritarian government would make it more likely that we can avoid x-risk, but how do you weigh that against the possibility that an AGI that is intent-aligned (but willing to accept harmful commands) would be more likely to create s-risks in the hands of an authoritarian state, as the post author has alluded to?
Also, what do you make of the author’s comment below?
In general, the public seems pretty bought-in on AI risk being a real issue and is interested in regulation. Having democratic instincts would perhaps push in the direction of good regulation (though the relationship here seems a little less clear).
Some people are more concerned about s-risk than extinction risk, and I certainly don’t want to dismiss them or imply that their concerns are mistaken or invalid, but I just find it a lot less likely that the AI project will lead to massive human suffering than that it will lead to human extinction.
the public seems pretty bought-in on AI risk being a real issue and is interested in regulation.
There’s a huge gulf between people’s expressing concern about AI to pollsters and the kind of regulations and shutdowns that would actually avert extinction. The people whose careers would be set back by many years if they had to find employment outside the AI field (including the “safety” people), together with the people who’ve invested a few hundred billion dollars into AI, form a powerful lobbying group in opposition to the members of the general public who tell pollsters they are concerned.
I don’t actually know enough about the authoritarian countries (e.g., Russia, China, Iran) to predict with any confidence how likely they are to prevent their populations from contributing to human extinction through AI. I can’t help but notice, though, that so far the US and the UK have done the most to advance the AI project. Also, a government’s deciding to shut down movements and technological trends is much more normalized and accepted in Russia, China, and Iran than it is in the West, particularly in the US.
I don’t have any prescriptions, really. I just think that the OP (titled “why the 2024 election matters, the AI risk case for Harris, & what you can do to help”, currently standing at 23 points) is badly thought out and badly reasoned. I wish I had called for readers to downvote it, because it encourages people to see everything through the Dem-v-Rep lens (even AI extinction risk, whose causal dependence on the election we don’t actually know) without contributing anything significant.