Regulation and AI Risk

Regulation and AI Risk covers the debate over whether regulation could be used to reduce the risks of Unfriendly AI, and what forms of regulation would be appropriate.

Several authors have advocated that AI research be regulated, but have been vague on the details. Yampolskiy & Fox (2012) note that university research programs in the social and medical sciences are overseen by institutional review boards, and propose setting up analogous review boards to evaluate potential AGI research. To be successful, AI regulation would have to be global, and there is the potential for an AI arms race between different nations. Partially because of this, McGinnis (2010) argues that the government should not attempt to regulate AGI development; rather, it should concentrate on providing funding to research projects intended to create safe AGI. Kaushal & Nolan (2015) point out that regulations on AGI development would give a speed advantage to any project willing to skirt them, and instead propose government funding (possibly in the form of an “AI Manhattan Project”) for AGI projects meeting particular criteria.

While Shulman & Armstrong (2009) argue that the unprecedentedly destabilizing effect of AGI could give world leaders cause to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that molecular nanotechnology could make countries more self-reliant and international cooperation considerably harder, and that AGI could contribute to such a development. AGI technology is also much harder to detect than, say, nuclear technology: AGI research can be done in a garage, while nuclear weapons require substantial infrastructure (McGinnis 2010). On the other hand, Scherer (2015) argues that artificial intelligence could nevertheless be susceptible to regulation, due to the increasing prominence of governmental entities and large corporations in AI research and development.

Goertzel & Pitt (2012) suggest that for regulation to be enacted, there might need to be an AGI Sputnik moment—a technological achievement that makes the possibility of AGI evident to the public and policy makers. They note that after such a moment, it might not take a very long time for full human-level AGI to be developed, while the negotiations required to enact new kinds of arms control treaties would take considerably longer.

See also

Ways I Expect AI Regulation To Increase Extinction Risk

1a3orn · Jul 4, 2023, 5:32 PM
231 points
32 comments · 7 min read · LW link

Q&A on Proposed SB 1047

Zvi · May 2, 2024, 3:10 PM
74 points
8 comments · 44 min read · LW link
(thezvi.wordpress.com)

RTFB: California’s AB 3211

Zvi · Jul 30, 2024, 1:10 PM
62 points
2 comments · 11 min read · LW link
(thezvi.wordpress.com)

How major governments can help with the most important century

HoldenKarnofsky · Feb 24, 2023, 6:20 PM
29 points
0 comments · 4 min read · LW link
(www.cold-takes.com)

Guide to SB 1047

Zvi · Aug 20, 2024, 1:10 PM
71 points
18 comments · 53 min read · LW link
(thezvi.wordpress.com)

[Linkpost] Chinese government’s guidelines on AI

RomanS · Dec 10, 2021, 9:10 PM
61 points
14 comments · 1 min read · LW link

Middle Child Phenomenon

PhilosophicalSoul · Mar 15, 2024, 8:47 PM
3 points
3 comments · 2 min read · LW link

Re: Anthropic’s suggested SB-1047 amendments

RobertM · Jul 27, 2024, 10:32 PM
87 points
13 comments · 9 min read · LW link
(www.documentcloud.org)

[Question] What Are Your Preferences Regarding The FLI Letter?

JenniferRM · Apr 1, 2023, 4:52 AM
−4 points
122 comments · 16 min read · LW link

AI Summer Harvest

Cleo Nardo · Apr 4, 2023, 3:35 AM
130 points
10 comments · 1 min read · LW link

Liability regimes for AI

Ege Erdil · Aug 19, 2024, 1:25 AM
152 points
34 comments · 5 min read · LW link

Excessive AI growth-rate yields little socio-economic benefit.

Cleo Nardo · Apr 4, 2023, 7:13 PM
27 points
22 comments · 4 min read · LW link

AI Regulation is Unsafe

Maxwell Tabarrok · Apr 22, 2024, 4:37 PM
40 points
41 comments · 4 min read · LW link
(www.maximum-progress.com)

List of requests for an AI slowdown/halt.

Cleo Nardo · Apr 14, 2023, 11:55 PM
46 points
6 comments · 1 min read · LW link

Californians, tell your reps to vote yes on SB 1047!

Holly_Elmore · Aug 12, 2024, 7:50 PM
40 points
24 comments · 1 min read · LW link

Thoughts on SB-1047

ryan_greenblatt · May 29, 2024, 11:26 PM
59 points
1 comment · 11 min read · LW link

[Question] Have any parties in the current European Parliamentary Election made public statements on AI?

MondSemmel · May 10, 2024, 10:22 AM
9 points
0 comments · 1 min read · LW link

Open Agency model can solve the AI regulation dilemma

Roman Leventov · Nov 8, 2023, 8:00 PM
22 points
1 comment · 2 min read · LW link

Why libertarians are advocating for regulation on AI

RobertM · Jun 14, 2023, 8:59 PM
35 points
13 comments · 4 min read · LW link

Learning societal values from law as part of an AGI alignment strategy

John Nay · Oct 21, 2022, 2:03 AM
5 points
18 comments · 54 min read · LW link

2019 AI Alignment Literature Review and Charity Comparison

Larks · Dec 19, 2019, 3:00 AM
130 points
18 comments · 62 min read · LW link

Self-regulation of safety in AI research

Gordon Seidoh Worley · Feb 25, 2018, 11:17 PM
12 points
6 comments · 2 min read · LW link

Antitrust-Compliant AI Industry Self-Regulation

Cullen · Jul 7, 2020, 8:53 PM
9 points
3 comments · 1 min read · LW link
(cullenokeefe.com)

Let’s think about slowing down AI

KatjaGrace · Dec 22, 2022, 5:40 PM
551 points
182 comments · 38 min read · LW link · 3 reviews
(aiimpacts.org)

AGI in sight: our look at the game board

Feb 18, 2023, 10:17 PM
227 points
135 comments · 6 min read · LW link
(andreamiotti.substack.com)

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Palus Astra · Apr 16, 2020, 12:50 AM
58 points
27 comments · 89 min read · LW link

The Schumer Report on AI (RTFB)

Zvi · May 24, 2024, 3:10 PM
34 points
3 comments · 36 min read · LW link
(thezvi.wordpress.com)

[Linkpost] Scott Alexander reacts to OpenAI’s latest post

Akash · Mar 11, 2023, 10:24 PM
27 points
0 comments · 5 min read · LW link
(astralcodexten.substack.com)

Anthropic’s Certificate of Incorporation

Zach Stein-Perlman · Jun 12, 2024, 1:00 PM
115 points
7 comments · 4 min read · LW link

Most People Don’t Realize We Have No Idea How Our AIs Work

Thane Ruthenis · Dec 21, 2023, 8:02 PM
159 points
42 comments · 1 min read · LW link

Cruxes on US lead for some domestic AI regulation

Zach Stein-Perlman · Sep 10, 2023, 6:00 PM
26 points
3 comments · 2 min read · LW link

My takes on SB-1047

leogao · Sep 9, 2024, 6:38 PM
151 points
8 comments · 4 min read · LW link

AI companies are unlikely to make high-assurance safety cases if timelines are short

ryan_greenblatt · Jan 23, 2025, 6:41 PM
143 points
5 comments · 13 min read · LW link

Verification methods for international AI agreements

Akash · Aug 31, 2024, 2:58 PM
14 points
1 comment · 4 min read · LW link
(arxiv.org)

Disagreements over the prioritization of existential risk from AI

Olivier Coutu · Oct 26, 2023, 5:54 PM
10 points
0 comments · 6 min read · LW link

Response to “Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers”

Matthew Wearden · Oct 30, 2023, 5:27 PM
5 points
2 comments · 6 min read · LW link
(matthewwearden.co.uk)

A list of all the deadlines in Biden’s Executive Order on AI

Valentin Baltadzhiev · Nov 1, 2023, 5:14 PM
26 points
2 comments · 11 min read · LW link

On excluding dangerous information from training

ShayBenMoshe · Nov 17, 2023, 11:14 AM
23 points
5 comments · 3 min read · LW link

[Question] How much of a concern are open-source LLMs in the short, medium and long terms?

JavierCC · May 10, 2023, 9:14 AM
5 points
0 comments · 1 min read · LW link

PCAST Working Group on Generative AI Invites Public Input

Christopher King · May 13, 2023, 10:49 PM
7 points
0 comments · 1 min read · LW link
(terrytao.wordpress.com)

Brief notes on the Senate hearing on AI oversight

Diziet · May 16, 2023, 10:29 PM
77 points
2 comments · 2 min read · LW link

Rishi Sunak mentions “existential threats” in talk with OpenAI, DeepMind, Anthropic CEOs

May 24, 2023, 9:06 PM
34 points
1 comment · 1 min read · LW link
(www.gov.uk)

Why Job Displacement Predictions are Wrong: Explanations of Cognitive Automation

Moritz Wallawitsch · May 30, 2023, 8:43 PM
−4 points
0 comments · 8 min read · LW link

What exactly does ‘Slow Down’ look like?

Steve M · Jun 3, 2023, 6:11 PM
7 points
0 comments · 1 min read · LW link

One implementation of regulatory GPU restrictions

porby · Jun 4, 2023, 8:34 PM
42 points
6 comments · 5 min read · LW link

Anthropic | Charting a Path to AI Accountability

Gabe M · Jun 14, 2023, 4:43 AM
34 points
2 comments · 3 min read · LW link
(www.anthropic.com)

Ban development of unpredictable powerful models?

TurnTrout · Jun 20, 2023, 1:43 AM
46 points
25 comments · 4 min read · LW link

EU AI Act passed Plenary vote, and X-risk was a main topic

Ariel G. · Jun 21, 2023, 6:33 PM
17 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Frontier AI Regulation

Zach Stein-Perlman · Jul 10, 2023, 2:30 PM
21 points
4 comments · 8 min read · LW link
(arxiv.org)

An upcoming US Supreme Court case may impede AI governance efforts

NickGabs · Jul 16, 2023, 11:51 PM
57 points
17 comments · 2 min read · LW link

Scaling and Sustaining Standards: A Case Study on the Basel Accords

Conrad K. · Jul 16, 2023, 10:01 PM
8 points
1 comment · 7 min read · LW link
(docs.google.com)

Russian parliamentarian: let’s ban personal computers and the Internet

RomanS · Jul 25, 2023, 5:30 PM
11 points
6 comments · 2 min read · LW link

If we had known the atmosphere would ignite

Jeffs · Aug 16, 2023, 8:28 PM
59 points
63 comments · 2 min read · LW link

AI Regulation May Be More Important Than AI Alignment For Existential Safety

otto.barten · Aug 24, 2023, 11:41 AM
65 points
39 comments · 5 min read · LW link

Report on Frontier Model Training

YafahEdelman · Aug 30, 2023, 8:02 PM
122 points
21 comments · 21 min read · LW link
(docs.google.com)

Against the Open Source / Closed Source Dichotomy: Regulated Source as a Model for Responsible AI Development

alex.herwix · Sep 4, 2023, 8:25 PM
4 points
12 comments · 6 min read · LW link
(forum.effectivealtruism.org)

[Linkpost] Mark Zuckerberg confronted about Meta’s Llama 2 AI’s ability to give users detailed guidance on making anthrax—Business Insider

mic · Sep 26, 2023, 12:05 PM
18 points
11 comments · 2 min read · LW link
(www.businessinsider.com)

A New Model for Compute Center Verification

Damin Curtis · Oct 10, 2023, 7:22 PM
8 points
0 comments · 5 min read · LW link

Muddling Along Is More Likely Than Dystopia

Jeffrey Heninger · Oct 20, 2023, 9:25 PM
88 points
10 comments · 8 min read · LW link

Thoughts on Hardware limits to Prevent AGI?

jrincayc · Oct 15, 2023, 11:45 PM
4 points
2 comments · 9 min read · LW link

UNGA General Debate speeches on AI

Odd anon · Oct 16, 2023, 6:36 AM
6 points
0 comments · 21 min read · LW link

Mauhn Releases AI Safety Documentation

Berg Severens · Jul 3, 2021, 9:23 PM
4 points
0 comments · 1 min read · LW link

Hardcode the AGI to need our approval indefinitely?

MichaelStJules · Nov 11, 2021, 7:04 AM
2 points
2 comments · 1 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe

Yitz · Apr 10, 2022, 9:02 PM
92 points
141 comments · 2 min read · LW link

The Regulatory Option: A response to near 0% survival odds

Matthew Lowenstein · Apr 11, 2022, 10:00 PM
46 points
21 comments · 6 min read · LW link

A Critique of AI Alignment Pessimism

ExCeph · Jul 19, 2022, 2:28 AM
9 points
1 comment · 9 min read · LW link

A Kindness, or The Inevitable Consequence of Perfect Inference (a short story)

samhealy · Dec 12, 2023, 11:03 PM
6 points
0 comments · 9 min read · LW link

Responding to ‘Beyond Hyperanthropomorphism’

ukc10014 · Sep 14, 2022, 8:37 PM
9 points
0 comments · 16 min read · LW link

Leveraging Legal Informatics to Align AI

John Nay · Sep 18, 2022, 8:39 PM
11 points
0 comments · 3 min read · LW link
(forum.effectivealtruism.org)

Cryptocurrency Exploits Show the Importance of Proactive Policies for AI X-Risk

eSpencer · Sep 20, 2022, 5:53 PM
1 point
0 comments · 4 min read · LW link

[Job]: AI Standards Development Research Assistant

Tony Barrett · Oct 14, 2022, 8:27 PM
2 points
0 comments · 2 min read · LW link

The Slippery Slope from DALLE-2 to Deepfake Anarchy

scasper · Nov 5, 2022, 2:53 PM
17 points
9 comments · 11 min read · LW link

[Question] Is there any policy for a fair treatment of AIs whose friendliness is in doubt?

nahoj · Nov 18, 2022, 7:01 PM
15 points
10 comments · 1 min read · LW link

Issues with uneven AI resource distribution

User_Luke · Dec 24, 2022, 1:18 AM
3 points
9 comments · 5 min read · LW link
(temporal.substack.com)

Who Aligns the Alignment Researchers?

Ben Smith · Mar 5, 2023, 11:22 PM
48 points
0 comments · 11 min read · LW link

[Question] Would “Manhattan Project” style be beneficial or deleterious for AI Alignment?

Valentin2026 · Aug 4, 2022, 7:12 PM
5 points
1 comment · 1 min read · LW link

Navigating the Nexus of AGI, Ethics, and Human Survival: A Mathematical Inquiry

Kan Yuenyong · Feb 29, 2024, 6:47 AM
1 point
0 comments · 3 min read · LW link

s/acc: Safe Accelerationism Manifesto

lorepieri · Dec 19, 2023, 10:19 PM
−4 points
5 comments · 2 min read · LW link
(lorenzopieri.com)

AI safety advocates should consider providing gentle pushback following the events at OpenAI

civilsociety · Dec 22, 2023, 6:55 PM
16 points
5 comments · 3 min read · LW link

[Question] Which battles should a young person pick?

Immanuel Jankvist · Dec 29, 2023, 8:28 PM
14 points
5 comments · 1 min read · LW link

AI, Intellectual Property, and the Techno-Optimist Revolution

Justin-Diamond · Jan 31, 2024, 6:30 PM
1 point
0 comments · 1 min read · LW link
(www.researchgate.net)

AI-generated opioids could be a catastrophic risk

ejk64 · Mar 20, 2024, 5:48 PM
0 points
2 comments · 3 min read · LW link

Comparing Alignment to other AGI interventions: Basic model

Martín Soto · Mar 20, 2024, 6:17 PM
12 points
4 comments · 7 min read · LW link

[Question] How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)?

Justausername · Apr 6, 2024, 6:31 AM
19 points
16 comments · 1 min read · LW link

Cybersecurity of Frontier AI Models: A Regulatory Review

Apr 25, 2024, 2:51 PM
8 points
0 comments · 8 min read · LW link

Reviewing the Structure of Current AI Regulations

May 7, 2024, 12:34 PM
29 points
0 comments · 13 min read · LW link

The Greater Goal: Sharing Knowledge with the Cosmos

pda.everyday · May 14, 2024, 10:46 PM
0 points
1 comment · 2 min read · LW link

AI 2030 – AI Policy Roadmap

LTM · May 17, 2024, 11:29 PM
8 points
0 comments · 1 min read · LW link

The Double Body Paradigm: What Comes After ASI Alignment?

De_Carvalho_Loick · Dec 14, 2024, 6:09 PM
1 point
0 comments · 6 min read · LW link

The AI Driver’s Licence—A Policy Proposal

Jul 21, 2024, 8:38 PM
0 points
1 comment · 19 min read · LW link

Case Story: Lack of Consumer Protection Procedures AI Manipulation and the Threat of Fund Concentration in Crypto Seeking Assistance to Fund a Civil Case to Establish Facts and Protect Vulnerable Consumers from Damage Caused by Automated Systems

Petr 'Margot' Andreev · Aug 8, 2024, 5:55 AM
−9 points
0 comments · 9 min read · LW link

The AI regulator’s toolbox: A list of concrete AI governance practices

Adam Jones · Aug 10, 2024, 9:15 PM
8 points
1 comment · 34 min read · LW link
(adamjones.me)

Why humans won’t control superhuman AIs.

Spiritus Dei · Oct 16, 2024, 4:48 PM
−11 points
1 comment · 6 min read · LW link

OpenAI’s cybersecurity is probably regulated by NIS Regulations

Adam Jones · Oct 25, 2024, 11:06 AM
11 points
2 comments · 2 min read · LW link
(adamjones.me)

Proposing the Conditional AI Safety Treaty (linkpost TIME)

otto.barten · Nov 15, 2024, 1:59 PM
10 points
8 comments · 3 min read · LW link
(time.com)

The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable

Matrice Jacobine · Nov 24, 2024, 9:36 AM
0 points
0 comments · 2 min read · LW link
(www.eff.org)

Should you increase AI alignment funding, or increase AI regulation?

Knight Lee · Nov 26, 2024, 9:17 AM
3 points
1 comment · 4 min read · LW link

How to solve the misuse problem assuming that in 10 years the default scenario is that AGI agents are capable of synthetizing pathogens

jeremtti · Nov 27, 2024, 9:17 PM
6 points
0 comments · 9 min read · LW link

AI Training Opt-Outs Reinforce Global Power Asymmetries

kushagra · Nov 30, 2024, 10:08 PM
3 points
0 comments · 6 min read · LW link

Where Would Good Forecasts Most Help AI Governance Efforts?

Violet Hour · Feb 11, 2025, 6:15 PM
3 points
0 comments · 6 min read · LW link

The open letter

kornai · Mar 29, 2023, 3:09 PM
−21 points
2 comments · 1 min read · LW link

The 0.2 OOMs/year target

Cleo Nardo · Mar 30, 2023, 6:15 PM
84 points
24 comments · 5 min read · LW link