You spend a whole section on the health of US democracy. Do you think if US democracy gets worse, then risks from AI get bigger?
It would seem to me that if the US system gets more autocratic, then it becomes slightly easier to slow down the AI juggernaut because fewer people would need to be convinced that the juggernaut is too dangerous to be allowed to continue.
Compare with climate change: the main reason high taxes on gasoline haven’t been imposed in the US the way they have in Europe is that US lawmakers have been more afraid than their European counterparts of getting voted out of office by voters angry that it costs them more to drive their big pickup trucks and SUVs. “Less democracy” in the US would’ve resulted in a more robust response to climate change! I don’t see anything about the AI situation that makes me expect a different outcome there: i.e., I expect “robust democracy” to interfere with a robust response to the AI juggernaut, especially in a few years when it becomes clear to most people just how useful AI-based products and services can be.
Another related argument is that elites don’t want themselves and their children to be killed by the AI juggernaut any more than the masses do: it’s not like castles (1000 years ago) or taxes on luxury goods, where the interests of the elites are fundamentally opposed to the interests of the masses. Elites (being smarter) are easier to explain the danger to than the masses are, so the more control we can give the elites relative to the masses, the better our chances of surviving the AI juggernaut, it seems to me. But IMHO we’ve strayed too far from the original topic of Harris vs Trump, and one sign that we’ve strayed too far is that IMHO Harris’s winning would strengthen US elites a little more than a Trump win would.
Along with direct harms, a single war relevant to US interests could absorb much of the nation’s political attention and vast material resources for months or years. This is particularly dangerous during times as technologically critical as ours
I agree that a war would absorb much of the nation’s political attention, but I believe that that effect would be more than cancelled out by how much easier it would become to pass new laws or new regulations. To give an example, the railroads were in 1914 as central to the US economy as the Internet is today—or close to it—and Washington nationalized the railroads during WWI, something that simply would not have been politically possible during peacetime.
You write that a war would consume “vast material resources for months or years”. Please explain what good material resources do (in the hands of governments, for example) toward stopping or rendering safe the AI juggernaut. It seems to me that if we could somehow reduce worldwide material-resource availability by a factor of 5 or even 10, our chances of surviving AI would get much better: resources would need to be focused on maintaining the basic infrastructure that keeps people safe and alive (e.g., mechanized farming, police forces), with the result that there wouldn’t be any resources left over to do huge AI training runs or to keep fabbing ever more efficient GPUs.
I hope I am not being perceived as an intrinsically authoritarian person who has seized on the danger from AI as an opportunity to advocate for his favored policy of authoritarianism. As soon as the danger from AI has passed, I will go right back to being what in the US is called a moderate libertarian. But I can’t help but notice that we would be a lot more likely to survive the AI juggernaut if all of the world’s governments were as authoritarian as, for example, the Kremlin is. That’s just the logic of the situation. AI is a truly revolutionary technology. Americans (and Western Europeans to a slightly lesser extent) are comfortable with revolutionary changes; Russia and China much less so. In fact, there is a good chance that as soon as Moscow and Beijing are assured that they will have “enough access to AI” to create a truly effective system of surveilling their own populations, they’ll lose interest in AI, as long as they don’t think they need to continue to invest in it in order to stay competitive militarily and economically with the West. AI is capable of transforming society in rapid, powerful, revolutionary ways—which means that, all other things being equal, Beijing (whose main goal is to avoid anything that might be described as a revolution) and Moscow will tend to want to suppress it as much as practical.
The kind of control and power Moscow and Beijing (and Tehran) have over their respective populations is highly useful for stopping those populations from contributing to the AI juggernaut. In contrast, American democracy and the American commitment to liberty make the US relatively bad at using political power to stop some project or activity being undertaken by its population. (And the US government was specifically designed by the Founding Fathers to make it a lot of hard work to impose any curb or control on the freedom of the American people.) America’s freedom, particularly economic and intellectual freedom, is highly helpful to the Enemy, namely, those working to make AI more powerful. If only more of the world’s countries were like Russia, China and Iran!
I used to be a huge admirer of the US Founding Fathers. Now that I know how dangerous the AI juggernaut is, I wish that Thomas Jefferson had choked on a chicken bone and died before he had the chance to exert any influence on the form of any government! (In the unlikely event that the danger from AI is successfully navigated in my lifetime, I will probably go right back to being an admirer of Thomas Jefferson.) Their design seemed like a great idea at the time, but now that we know how dangerous AI is, we can see in retrospect that it was a bad idea and that the architects of the governmental systems of Russia, China and Iran were in a very real sense “more correct”: those choices of governmental architecture make it easier for humanity to survive the AI gotcha (which was completely hidden from any possibility of human perception at the time those architectural decisions were made, but still, right is right).
I feel that the people who recognize the AI juggernaut for the potent danger that it is are compartmentalizing their awareness of the danger in a regrettable way. Maybe a little exercise would be helpful. Do you admire the inventors of the transistor? Still? I used to, but no longer do. If William Shockley had slipped on a banana peel and hit his head on something sharp and died before he had a chance to assist in the invention of the transistor, that would have been a good thing, I now believe, because the invention of the transistor would have been delayed—by an expected 5 years in my estimation—giving humanity more time to become collectively wiser before it must confront the great danger of AI. Of course, we cannot hold Shockley morally responsible because there is no way he could have known about the AI danger. But still, if your awareness of the potency of the danger from AI doesn’t cause you to radically re-evaluate the goodness or badness of the invention of the transistor, then you’re showing a regrettable lapse in rationality IMHO. Ditto most advances in computing. The Soviets distrusted information technology. The Soviets were right—probably for the wrong reason, but right is still right, and no one who recognizes AI for the potent danger it is should continue to use the Soviet distrust of info tech as a point against them.
(This comment is addressed to those readers who consider AI to be so dangerous as to make AI risk the primary consideration in this conversation. I say that to account for the possibility that the OP cares mainly about more mundane political concerns and brought up AI safety because he (wrongly IMHO) believes it will help him make his argument.)
Your reasoning makes sense with regards to how a more authoritarian government would make it more likely that we can avoid x-risk, but how do you weigh that against the possibility that an AGI that is intent-aligned (but willing to accept harmful commands) would be more likely to create s-risks in the hands of an authoritarian state, as the post author has alluded to?
Also, what do you make of the author’s comment below?
In general, the public seems pretty bought-in on AI risk being a real issue and is interested in regulation. Having democratic instincts would perhaps push in the direction of good regulation (though the relationship here seems a little less clear).
Some people are more concerned about S-risk than extinction risk, and I certainly don’t want to dismiss them or imply that their concerns are mistaken or invalid, but I just find it a lot less likely that the AI project will lead to massive human suffering than that it will lead to human extinction.
the public seems pretty bought-in on AI risk being a real issue and is interested in regulation.
There’s a huge gulf between people’s expressing concern about AI to pollsters and the kind of regulations and shutdowns that would actually avert extinction. The people (including the “safety” people) whose careers would be set back by many years if they had to find employment outside the AI field, and the people who’ve invested a few hundred billion dollars into AI, are a powerful lobbying group in opposition to the members of the general public who tell pollsters they are concerned.
I don’t actually know enough about the authoritarian countries (e.g., Russia, China, Iran) to predict with any confidence how likely they are to prevent their populations from contributing to human extinction through AI. I can’t help but notice though that so far the US and the UK have done the most to advance the AI project. Also, the government’s deciding to shut down movements and technological trends is much more normalized and accepted in Russia, China and Iran than it is in the West, particularly in the US.
I don’t have any prescriptions really. I just think that the OP (titled “why the 2024 election matters, the AI risk case for Harris, & what you can do to help”, currently standing at 23 points) is badly thought out and badly reasoned, and I wish I had called for readers to downvote it because it encourages people to see everything through the Dem-v-Rep lens (even AI extinction risk, whose causal dependence on the election we don’t actually know) without contributing anything significant.