Thanks for writing this. Stories like this help me understand possibilities for the future (and understand how others think).
The US and many other Western governments are gears-locked, because the politicians are products of this memetic environment. People say it’s a miracle that the US isn’t in a civil war already.
So far in your vignette, AI is sufficiently important and has sufficient public attention that any functional government would be (1) regulating it, or at least exerting pressure on the shape of AI through the possibility of regulation, and especially (2) appreciating the national security implications of near-future AI. But in your vignette, governments fail to respond meaningfully to AI; they aren’t part of the picture (so far). This would surprise me. I don’t understand how epistemic decline on the internet translates into governments’ failure to respond to AI. How do you imagine this happening? I expect that the US federal government will be very important in the next decade, so I’m very interested in better understanding possibilities.
Also: do epistemic decline and social dysfunction affect AI companies?
Excellent questions & pushback, thanks! Hmm, let me think...
I think that if we had anything close to an adequate government, AI research would be heavily regulated already. So I’m not sure what you mean by “functional government.” I guess you are saying that, if things go according to my story, then by 2026 the US government would probably be provoked by all the AI advancements into doing a lot more regulation of AI, regulation that would change the picture in some way and thus be worthy of mention?
I guess my expectation is:
(a) The government will have “woken up” to AI by 2026 more than it has as of 2021. It will be attempting to pass more regulations as a result.
(b) However, from the perspective of the government and mainstream USA, things in 2026 aren’t that different from how they were in 2021. There’s more effective censorship/propaganda, and now there are all these nifty chatbot things; there’s more hype about the impending automation of loads of jobs, but whatever, hype cycles come and go, and large numbers of jobs aren’t actually being automated away yet.
(c) Everything will be hyperpartisan and polarized, so that in order to pass any regulation about AI the government will need to have a big political fight between Right and Left, and whoever gets more votes wins.
(d) What regulations the government does pass will probably be ineffective or directed at the wrong goals. For example, when one party finally gets enough votes to pass stuff, they’ll focus on whatever issues were most memetically fit in the latest news cycle rather than on the issues that actually matter long-term. On the issues that actually matter, meanwhile, they’ll be listening to the wrong experts (those with political clout, fame, and the right credentials and demographics, rather than, e.g., those with lots of Alignment Forum karma).
Thus, I expect that no regulations of note relating to AI safety or alignment will have been passed. For censorship and stuff, I expect that the left will be trying to undo right-wing censorship and propaganda while strengthening their own, and the right will be trying to undo left-wing censorship and propaganda while strengthening their own. I cynically expect the result to be that both sides get stronger censorship and propaganda within their own territory, whatever that turns out to be. (There’ll be battlegrounds, contested internet territories where maybe no censorship can thrive. Or maybe the Left will conquer all the territory. Idk. I guess in this story they both get some territory.)
Yep, epistemic dysfunction affects AI companies too. It hasn’t progressed to the point of, say, halving the speed of their capabilities research, though; it’s more like a 0.9x multiplier on their speed, I’d say. This is a thing worth thinking more about and modelling in more detail for sure.
What do you think sensible AI safety regulation would entail?
I don’t know; I haven’t thought much about it. I’d love it if people realized it’s more dangerous than nukes and treated it accordingly.
I tentatively agree, but “people realizing it’s more dangerous than nukes” has potential negative consequences too: an arms race is the default outcome of such national security threats/opportunities. I’ve recently been trying to think about different memes about AI and their possible effects… it’s possible that memes like “powerful AI is fragile” could get us the same regulation and safety work with less arms racing.
How about “AI is like summoning a series of increasingly powerful demons/aliens and trying to make them do what we want by giving them various punishments and rewards?”
Consequences (in expectation) if widely accepted: very good.
Compressibility: poor (at least, good compressions are not obvious).
Probability of (a compressed version) becoming widely accepted or Respectable Opinion: moderately low due to weirdness. Less weird explanations of why AI might not do what we want would be more Respectable and acceptable.
Leverage (i.e., increase in that probability from increased marginal effort to promote that meme): uncertain.
I disagree about compressibility; Elon said “AI is summoning the demon,” and that’s a five-word phrase that seems to have been somewhat memorable and memetically fit. I think that if we had a good longer piece of content expressing the idea, something lots of people could read/watch/play, then that would probably be enough.