AI Labs Wouldn’t Be Convicted of Treason or Sedition
This is a shortened version of “Preventing AI from Overthrowing the Government” from my Substack, trimmed to focus on the parts most likely to interest LessWrong. The full version with citations can be found here.
Introduction
Many AI labs, including OpenAI, are forthright in saying that their goal is to create AGI. An AI like that would be able to use its intelligence to accomplish goals, reshaping the world around it either according to the wishes of its creators or against those wishes if it is not properly aligned. Former OpenAI employee Leopold Aschenbrenner recently wrote that “it’s quite plausible individual CEOs would have the power to literally coup the US government”. And if AGI could take over the world from humans, as many expect it to do, it could also accomplish the strictly easier task of seizing control from the US government.
The Constitution’s Supremacy Clause declares that “This Constitution, and the Laws of the United States… shall be the supreme Law of the Land.” This raises the question of whether researchers developing AGI capable of taking over the government could be tried for treason or sedition. Treason is defined as levying war against the US or aiding its enemies, and sedition is using or planning to use force against the government. While the development of powerful AI systems with the potential to reshape the world is extremely worrying and dangerous, legal precedent suggests that current AI labs and researchers would not be found guilty of either. Other regulation must therefore ensure that the government maintains control over insurgent AI.
The Inapplicability of Treason and Sedition
So far, no court has ruled on the applicability of treason or sedition laws to AI labs, but courts rely on precedent to flexibly apply old laws to new situations. Courts would likely rule in favor of AI labs because of precedents addressing free speech, the imminent lawless action standard, and the high burden of proof needed to convict anyone of treason or sedition.
Free Speech
AI labs and researchers have stated that they want to create AGI powerful enough that it may disrupt government power. The First Amendment of the Constitution states that “Congress shall make no law… abridging the freedom of speech…” There are notable exceptions to this rule, including “incitement, defamation, fraud, obscenity… fighting words, and threats.” The speech of AI labs and researchers does not fall under the incitement exception because of the “imminent lawless action” standard.
The precedent-setting cases of Brandenburg v. Ohio and Hess v. Indiana established the imminent lawless action standard for deciding whether speech advocating force or crime is protected by the First Amendment. In Brandenburg v. Ohio, the Supreme Court ruled that a speech by a KKK leader advocating violence was protected by the First Amendment because the government may only prohibit speech that is “directed to inciting or producing imminent lawless action” and “likely to incite or produce such action.” Brandenburg’s speech did not meet the imminent lawless action criteria because it was not advocating immediate violence. Five years later, in Hess v. Indiana, the court once again ruled on the side of free speech. Hess, an antiwar protestor, shouted “We’ll take the fucking street later” at an officer who was clearing the protesters from the area. The Supreme Court decided that although Hess had been advocating lawless action, it was at an indefinite future time, so it was not an immediate threat and therefore counted as protected speech.
Nobody really knows when AGI will be successfully invented. Under this standard, advocacy about what AI could or should do cannot be “imminent,” so most such speech is not incitement and is therefore protected by the First Amendment. That means some rather extreme statements are protected. For example, the founder of the AI lab Extropic, who goes by the pseudonym Beff Jezos on Twitter, has said that it is fine if AI kills everyone. He faced no legal consequences for the statement.
AI Lab Actions
Treason Charges
Potentially treasonous actions are distinct from speech. However, it is incredibly difficult to make a treason charge stick. In the entirety of U.S. history, the government has convicted fewer than a dozen Americans of treason. Part of the reason is that the Founding Fathers built an extra-high burden of proof into the Constitution itself: a conviction requires “confession in open court” or “the testimony of two witnesses to the same overt act.” This ensures that the government cannot abuse the charge against its opponents. Judges hearing a case in which an AI lab was accused of treason would therefore likely interpret treason law conservatively and side with the labs.
Sedition Charges
The conviction of a leader of the January 6th insurrection at the U.S. Capitol was the first time in more than two decades that courts convicted anyone of seditious conspiracy. One of the requirements set out in Direct Sales Co. v. U.S. is that “charges of conspiracy are not to be made out by piling inference upon inference,” meaning there must be a clear intention and agreement. Since the risks of AI are speculative and government overthrow via AGI is an unusual case, charges of seditious conspiracy brought against AI labs would likely fail this standard.
Conclusion
Artificial intelligence is a novel technology introducing new risks to governments, risks that cannot easily be mapped onto the familiar concepts of treason and sedition. Existing legislation covers some of this ground, but governments must continue to issue new rules and regulations to prevent AI or its creators from supplanting their control.
Is there a meaningful group of people who believed chief executives of leading AI labs would be convicted of treason or sedition? What reasoning went behind privileging this particular hypothesis as one worth looking at, rejecting, and writing a LW post about?
I personally thought that “taking actions that would give yourself more power than the government” was something that… seemed like it shouldn’t be allowed? Many people I talked to shared your perspective of “of course AI labs are in the clear,” but it wasn’t so obvious to me. I originally did the research in April and May, but since then the Situational Awareness report came out with the quote “it’s quite plausible individual CEOs would have the power to literally coup the US government.” I haven’t seen anyone else talking about this.
My reasoning for choosing to write about this topic went like this:
“They are gaining abilities which will allow them to overthrow the government.”
“What? Are they allowed to do that? Isn’t the government going to stop them?”
If I were in charge of a government, I sure wouldn’t want people doing things that would set them up to overthrow me. (And this is true even if that government gets its mandate from its citizens, like the US.)
Maybe the details of treason and sedition laws are more common knowledge than I thought, and everyone but me picked up how they worked from other sources?