
Regulation and AI Risk


Regulation and AI risk is the debate over whether regulation could be used to reduce the risks of Unfriendly AI, and over what forms of regulation would be appropriate.

Several authors have advocated that AI research be regulated, but have been vague on the details. Yampolskiy & Fox (2012) note that university research programs in the social and medical sciences are overseen by institutional review boards, and propose setting up analogous review boards to evaluate potential AGI research. To be successful, AI regulation would have to be global, and there is the potential for an AI arms race between nations. Partially because of this, McGinnis (2010) argues that the government should not attempt to regulate AGI development. Rather, it should concentrate on providing funding to research projects intended to create safe AGI. Kaushal & Nolan (2015) point out that regulations on AGI development would give a speed advantage to any project willing to skirt them, and instead propose government funding (possibly in the form of an “AI Manhattan Project”) for AGI projects meeting particular criteria.

While Shulman & Armstrong (2009) argue that the unprecedentedly destabilizing effect of AGI could give world leaders cause to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that molecular nanotechnology could make countries more self-reliant and international cooperation considerably harder, and that AGI could contribute to such a development. AGI technology is also much harder to detect than, e.g., nuclear technology: AGI research can be done in a garage, while nuclear weapons require substantial infrastructure (McGinnis 2010). On the other hand, Scherer (2015) argues that artificial intelligence could nevertheless be susceptible to regulation, due to the increasing prominence of governmental entities and large corporations in AI research and development.

Goertzel & Pitt (2012) suggest that for regulation to be enacted, there might need to be an AGI Sputnik moment—a technological achievement that makes the possibility of AGI evident to the public and policy makers. They note that after such a moment, it might not take a very long time for full human-level AGI to be developed, while the negotiations required to enact new kinds of arms control treaties would take considerably longer.


Ways I Expect AI Regulation To Increase Extinction Risk
1a3orn · 4 Jul 2023 17:32 UTC · 227 points · 32 comments · 7 min read · LW link

[Linkpost] Chinese government’s guidelines on AI
RomanS · 10 Dec 2021 21:10 UTC · 61 points · 14 comments · 1 min read · LW link

How major governments can help with the most important century
HoldenKarnofsky · 24 Feb 2023 18:20 UTC · 29 points · 0 comments · 4 min read · LW link · (www.cold-takes.com)

Middle Child Phenomenon
PhilosophicalSoul · 15 Mar 2024 20:47 UTC · 3 points · 3 comments · 2 min read · LW link

Q&A on Proposed SB 1047
Zvi · 2 May 2024 15:10 UTC · 74 points · 8 comments · 44 min read · LW link · (thezvi.wordpress.com)

[Question] Have any parties in the current European Parliamentary Election made public statements on AI?
MondSemmel · 10 May 2024 10:22 UTC · 9 points · 0 comments · 1 min read · LW link

The Schumer Report on AI (RTFB)
Zvi · 24 May 2024 15:10 UTC · 34 points · 3 comments · 36 min read · LW link · (thezvi.wordpress.com)

[Question] What Are Your Preferences Regarding The FLI Letter?
JenniferRM · 1 Apr 2023 4:52 UTC · −4 points · 122 comments · 16 min read · LW link

AI Summer Harvest
Cleo Nardo · 4 Apr 2023 3:35 UTC · 130 points · 10 comments · 1 min read · LW link

Guide to SB 1047
Zvi · 20 Aug 2024 13:10 UTC · 71 points · 18 comments · 53 min read · LW link · (thezvi.wordpress.com)

Excessive AI growth-rate yields little socio-economic benefit.
Cleo Nardo · 4 Apr 2023 19:13 UTC · 27 points · 22 comments · 4 min read · LW link

Anthropic’s Certificate of Incorporation
Zach Stein-Perlman · 12 Jun 2024 13:00 UTC · 115 points · 4 comments · 4 min read · LW link

List of requests for an AI slowdown/halt.
Cleo Nardo · 14 Apr 2023 23:55 UTC · 46 points · 6 comments · 1 min read · LW link

My takes on SB-1047
leogao · 9 Sep 2024 18:38 UTC · 151 points · 8 comments · 4 min read · LW link

Most People Don’t Realize We Have No Idea How Our AIs Work
Thane Ruthenis · 21 Dec 2023 20:02 UTC · 158 points · 42 comments · 1 min read · LW link

Re: Anthropic’s suggested SB-1047 amendments
RobertM · 27 Jul 2024 22:32 UTC · 87 points · 13 comments · 9 min read · LW link · (www.documentcloud.org)

RTFB: California’s AB 3211
Zvi · 30 Jul 2024 13:10 UTC · 62 points · 2 comments · 11 min read · LW link · (thezvi.wordpress.com)

Open Agency model can solve the AI regulation dilemma
Roman Leventov · 8 Nov 2023 20:00 UTC · 22 points · 1 comment · 2 min read · LW link

Verification methods for international AI agreements
Akash · 31 Aug 2024 14:58 UTC · 14 points · 1 comment · 4 min read · LW link · (arxiv.org)

Why libertarians are advocating for regulation on AI
RobertM · 14 Jun 2023 20:59 UTC · 35 points · 13 comments · 4 min read · LW link

2019 AI Alignment Literature Review and Charity Comparison
Larks · 19 Dec 2019 3:00 UTC · 130 points · 18 comments · 62 min read · LW link

Learning societal values from law as part of an AGI alignment strategy
John Nay · 21 Oct 2022 2:03 UTC · 5 points · 18 comments · 54 min read · LW link

Self-regulation of safety in AI research
Gordon Seidoh Worley · 25 Feb 2018 23:17 UTC · 12 points · 6 comments · 2 min read · LW link

Antitrust-Compliant AI Industry Self-Regulation
Cullen · 7 Jul 2020 20:53 UTC · 9 points · 3 comments · 1 min read · LW link · (cullenokeefe.com)

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah
Palus Astra · 16 Apr 2020 0:50 UTC · 58 points · 27 comments · 89 min read · LW link

Let’s think about slowing down AI
KatjaGrace · 22 Dec 2022 17:40 UTC · 549 points · 182 comments · 38 min read · LW link · 3 reviews · (aiimpacts.org)

AGI in sight: our look at the game board
18 Feb 2023 22:17 UTC · 225 points · 135 comments · 6 min read · LW link · (andreamiotti.substack.com)

Liability regimes for AI
Ege Erdil · 19 Aug 2024 1:25 UTC · 147 points · 34 comments · 5 min read · LW link

Californians, tell your reps to vote yes on SB 1047!
Holly_Elmore · 12 Aug 2024 19:50 UTC · 40 points · 24 comments · 1 min read · LW link

[Linkpost] Scott Alexander reacts to OpenAI’s latest post
Akash · 11 Mar 2023 22:24 UTC · 27 points · 0 comments · 5 min read · LW link · (astralcodexten.substack.com)

AI Regulation is Unsafe
Maxwell Tabarrok · 22 Apr 2024 16:37 UTC · 40 points · 41 comments · 4 min read · LW link · (www.maximum-progress.com)

Thoughts on SB-1047
ryan_greenblatt · 29 May 2024 23:26 UTC · 59 points · 1 comment · 11 min read · LW link

Cruxes on US lead for some domestic AI regulation
Zach Stein-Perlman · 10 Sep 2023 18:00 UTC · 26 points · 3 comments · 2 min read · LW link

[Question] How much of a concern are open-source LLMs in the short, medium and long terms?
JavierCC · 10 May 2023 9:14 UTC · 5 points · 0 comments · 1 min read · LW link

PCAST Working Group on Generative AI Invites Public Input
Christopher King · 13 May 2023 22:49 UTC · 7 points · 0 comments · 1 min read · LW link · (terrytao.wordpress.com)

Brief notes on the Senate hearing on AI oversight
Diziet · 16 May 2023 22:29 UTC · 77 points · 2 comments · 2 min read · LW link

Rishi Sunak mentions “existential threats” in talk with OpenAI, DeepMind, Anthropic CEOs
24 May 2023 21:06 UTC · 34 points · 1 comment · 1 min read · LW link · (www.gov.uk)

Why Job Displacement Predictions are Wrong: Explanations of Cognitive Automation
Moritz Wallawitsch · 30 May 2023 20:43 UTC · −4 points · 0 comments · 8 min read · LW link

RoboNet—A new internet protocol for AI
antoniomax · 30 May 2023 17:55 UTC · −13 points · 1 comment · 18 min read · LW link

What exactly does ‘Slow Down’ look like?
Steve M · 3 Jun 2023 18:11 UTC · 7 points · 0 comments · 1 min read · LW link

One implementation of regulatory GPU restrictions
porby · 4 Jun 2023 20:34 UTC · 42 points · 6 comments · 5 min read · LW link

RAMP—RoboNet Artificial Media Protocol
antoniomax · 7 Jun 2023 19:01 UTC · −1 points · 0 comments · 19 min read · LW link · (antoniomax.substack.com)

Anthropic | Charting a Path to AI Accountability
Gabe M · 14 Jun 2023 4:43 UTC · 34 points · 2 comments · 3 min read · LW link · (www.anthropic.com)

Ban development of unpredictable powerful models?
TurnTrout · 20 Jun 2023 1:43 UTC · 46 points · 25 comments · 4 min read · LW link

EU AI Act passed Plenary vote, and X-risk was a main topic
Ariel G. · 21 Jun 2023 18:33 UTC · 17 points · 0 comments · 1 min read · LW link · (forum.effectivealtruism.org)

Frontier AI Regulation
Zach Stein-Perlman · 10 Jul 2023 14:30 UTC · 21 points · 4 comments · 8 min read · LW link · (arxiv.org)

An upcoming US Supreme Court case may impede AI governance efforts
NickGabs · 16 Jul 2023 23:51 UTC · 57 points · 17 comments · 2 min read · LW link

Scaling and Sustaining Standards: A Case Study on the Basel Accords
Conrad K. · 16 Jul 2023 22:01 UTC · 8 points · 1 comment · 7 min read · LW link · (docs.google.com)

Russian parliamentarian: let’s ban personal computers and the Internet
RomanS · 25 Jul 2023 17:30 UTC · 11 points · 6 comments · 2 min read · LW link

If we had known the atmosphere would ignite
Jeffs · 16 Aug 2023 20:28 UTC · 56 points · 63 comments · 2 min read · LW link

AI Regulation May Be More Important Than AI Alignment For Existential Safety
otto.barten · 24 Aug 2023 11:41 UTC · 65 points · 39 comments · 5 min read · LW link

Report on Frontier Model Training
YafahEdelman · 30 Aug 2023 20:02 UTC · 122 points · 21 comments · 21 min read · LW link · (docs.google.com)

Against the Open Source / Closed Source Dichotomy: Regulated Source as a Model for Responsible AI Development
alex.herwix · 4 Sep 2023 20:25 UTC · 4 points · 12 comments · 6 min read · LW link · (forum.effectivealtruism.org)

[Linkpost] Mark Zuckerberg confronted about Meta’s Llama 2 AI’s ability to give users detailed guidance on making anthrax—Business Insider
mic · 26 Sep 2023 12:05 UTC · 18 points · 11 comments · 2 min read · LW link · (www.businessinsider.com)

A New Model for Compute Center Verification
Damin Curtis · 10 Oct 2023 19:22 UTC · 8 points · 0 comments · 5 min read · LW link

Muddling Along Is More Likely Than Dystopia
Jeffrey Heninger · 20 Oct 2023 21:25 UTC · 83 points · 10 comments · 8 min read · LW link

Thoughts on Hardware limits to Prevent AGI?
jrincayc · 15 Oct 2023 23:45 UTC · 4 points · 1 comment · 9 min read · LW link

UNGA General Debate speeches on AI
Odd anon · 16 Oct 2023 6:36 UTC · 6 points · 0 comments · 21 min read · LW link

Mauhn Releases AI Safety Documentation
Berg Severens · 3 Jul 2021 21:23 UTC · 4 points · 0 comments · 1 min read · LW link

Hardcode the AGI to need our approval indefinitely?
MichaelStJules · 11 Nov 2021 7:04 UTC · 2 points · 2 comments · 1 min read · LW link

[Question] Convince me that humanity is as doomed by AGI as Yudkowsky et al., seems to believe
Yitz · 10 Apr 2022 21:02 UTC · 92 points · 141 comments · 2 min read · LW link

The Regulatory Option: A response to near 0% survival odds
Matthew Lowenstein · 11 Apr 2022 22:00 UTC · 46 points · 21 comments · 6 min read · LW link

A Critique of AI Alignment Pessimism
ExCeph · 19 Jul 2022 2:28 UTC · 9 points · 1 comment · 9 min read · LW link

A Kindness, or The Inevitable Consequence of Perfect Inference (a short story)
samhealy · 12 Dec 2023 23:03 UTC · 6 points · 0 comments · 9 min read · LW link

Responding to ‘Beyond Hyperanthropomorphism’
ukc10014 · 14 Sep 2022 20:37 UTC · 8 points · 0 comments · 16 min read · LW link

Leveraging Legal Informatics to Align AI
John Nay · 18 Sep 2022 20:39 UTC · 11 points · 0 comments · 3 min read · LW link · (forum.effectivealtruism.org)

Cryptocurrency Exploits Show the Importance of Proactive Policies for AI X-Risk
eSpencer · 20 Sep 2022 17:53 UTC · 1 point · 0 comments · 4 min read · LW link

[Job]: AI Standards Development Research Assistant
Tony Barrett · 14 Oct 2022 20:27 UTC · 2 points · 0 comments · 2 min read · LW link

The Slippery Slope from DALLE-2 to Deepfake Anarchy
scasper · 5 Nov 2022 14:53 UTC · 17 points · 9 comments · 11 min read · LW link

[Question] Is there any policy for a fair treatment of AIs whose friendliness is in doubt?
nahoj · 18 Nov 2022 19:01 UTC · 15 points · 10 comments · 1 min read · LW link

Issues with uneven AI resource distribution
User_Luke · 24 Dec 2022 1:18 UTC · 3 points · 9 comments · 5 min read · LW link · (temporal.substack.com)

Who Aligns the Alignment Researchers?
Ben Smith · 5 Mar 2023 23:22 UTC · 48 points · 0 comments · 11 min read · LW link

[Question] Would “Manhattan Project” style be beneficial or deleterious for AI Alignment?
Valentin2026 · 4 Aug 2022 19:12 UTC · 5 points · 1 comment · 1 min read · LW link

Navigating the Nexus of AGI, Ethics, and Human Survival: A Mathematical Inquiry
Kan Yuenyong · 29 Feb 2024 6:47 UTC · 1 point · 0 comments · 3 min read · LW link

s/acc: Safe Accelerationism Manifesto
lorepieri · 19 Dec 2023 22:19 UTC · −4 points · 5 comments · 2 min read · LW link · (lorenzopieri.com)

AI safety advocates should consider providing gentle pushback following the events at OpenAI
civilsociety · 22 Dec 2023 18:55 UTC · 16 points · 5 comments · 3 min read · LW link

Open positions: Research Analyst at the AI Standards Lab
22 Dec 2023 16:31 UTC · 17 points · 0 comments · 1 min read · LW link

[Question] Which battles should a young person pick?
EmanuelJankvist · 29 Dec 2023 20:28 UTC · 14 points · 5 comments · 1 min read · LW link

AI, Intellectual Property, and the Techno-Optimist Revolution
Justin-Diamond · 31 Jan 2024 18:30 UTC · 1 point · 0 comments · 1 min read · LW link · (www.researchgate.net)

AI-generated opioids are a catastrophic risk
ejk64 · 20 Mar 2024 17:48 UTC · 0 points · 2 comments · 3 min read · LW link

Comparing Alignment to other AGI interventions: Basic model
Martín Soto · 20 Mar 2024 18:17 UTC · 12 points · 4 comments · 7 min read · LW link

[Question] How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)?
Justausername · 6 Apr 2024 6:31 UTC · 19 points · 16 comments · 1 min read · LW link

Cybersecurity of Frontier AI Models: A Regulatory Review
25 Apr 2024 14:51 UTC · 8 points · 0 comments · 8 min read · LW link

Reviewing the Structure of Current AI Regulations
7 May 2024 12:34 UTC · 29 points · 0 comments · 13 min read · LW link

The Greater Goal: Sharing Knowledge with the Cosmos
pda.everyday · 14 May 2024 22:46 UTC · 0 points · 1 comment · 2 min read · LW link

AI 2030 – AI Policy Roadmap
LTM · 17 May 2024 23:29 UTC · 8 points · 0 comments · 1 min read · LW link

The AI Driver’s Licence—A Policy Proposal
21 Jul 2024 20:38 UTC · 0 points · 1 comment · 19 min read · LW link

Case Story: Lack of Consumer Protection Procedures AI Manipulation and the Threat of Fund Concentration in Crypto Seeking Assistance to Fund a Civil Case to Establish Facts and Protect Vulnerable Consumers from Damage Caused by Automated Systems
Petr Andreev · 8 Aug 2024 5:55 UTC · −9 points · 0 comments · 9 min read · LW link

The AI regulator’s toolbox: A list of concrete AI governance practices
Adam Jones · 10 Aug 2024 21:15 UTC · 7 points · 1 comment · 34 min read · LW link · (adamjones.me)

Why humans won’t control superhuman AIs.
Spiritus Dei · 16 Oct 2024 16:48 UTC · −11 points · 1 comment · 6 min read · LW link

OpenAI’s cybersecurity is probably regulated by NIS Regulations
Adam Jones · 25 Oct 2024 11:06 UTC · 11 points · 2 comments · 2 min read · LW link · (adamjones.me)

Proposing the Conditional AI Safety Treaty (linkpost TIME)
otto.barten · 15 Nov 2024 13:59 UTC · 10 points · 8 comments · 3 min read · LW link · (time.com)

The open letter
kornai · 29 Mar 2023 15:09 UTC · −21 points · 2 comments · 1 min read · LW link

The 0.2 OOMs/year target
Cleo Nardo · 30 Mar 2023 18:15 UTC · 84 points · 24 comments · 5 min read · LW link

Disagreements over the prioritization of existential risk from AI
Olivier Coutu · 26 Oct 2023 17:54 UTC · 10 points · 0 comments · 6 min read · LW link

Response to “Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers”
Matthew Wearden · 30 Oct 2023 17:27 UTC · 5 points · 2 comments · 6 min read · LW link · (matthewwearden.co.uk)

A list of all the deadlines in Biden’s Executive Order on AI
Valentin Baltadzhiev · 1 Nov 2023 17:14 UTC · 26 points · 2 comments · 11 min read · LW link

On excluding dangerous information from training
ShayBenMoshe · 17 Nov 2023 11:14 UTC · 23 points · 5 comments · 3 min read · LW link