
Slowing Down AI

Last edit: 24 Dec 2022 9:12 UTC by Remmelt

Let’s think about slowing down AI

KatjaGrace, 22 Dec 2022 17:40 UTC
549 points
182 comments · 38 min read · LW link · 3 reviews
(aiimpacts.org)

Slowing AI: Reading list

Zach Stein-Perlman, 17 Apr 2023 14:30 UTC
45 points
3 comments · 4 min read · LW link

Slowing AI: Foundations

Zach Stein-Perlman, 17 Apr 2023 14:30 UTC
45 points
11 comments · 17 min read · LW link

Leverage points for a pause

Remmelt, 28 Aug 2024 9:21 UTC
3 points
0 comments · 1 min read · LW link

The public supports regulating AI for safety

Zach Stein-Perlman, 17 Feb 2023 4:10 UTC
114 points
9 comments · 1 min read · LW link
(aiimpacts.org)

What an actually pessimistic containment strategy looks like

lc, 5 Apr 2022 0:19 UTC
674 points
138 comments · 6 min read · LW link · 2 reviews

Cruxes on US lead for some domestic AI regulation

Zach Stein-Perlman, 10 Sep 2023 18:00 UTC
26 points
3 comments · 2 min read · LW link

AI Summer Harvest

Cleo Nardo, 4 Apr 2023 3:35 UTC
130 points
10 comments · 1 min read · LW link

Is principled mass-outreach possible, for AGI X-risk?

Nicholas / Heather Kross, 21 Jan 2024 17:45 UTC
9 points
5 comments · 3 min read · LW link

New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development

Akash, 5 Apr 2023 1:26 UTC
46 points
9 comments · 1 min read · LW link
(today.yougov.com)

List of requests for an AI slowdown/halt.

Cleo Nardo, 14 Apr 2023 23:55 UTC
46 points
6 comments · 1 min read · LW link

Ways to buy time

12 Nov 2022 19:31 UTC
34 points
23 comments · 12 min read · LW link

Excessive AI growth-rate yields little socio-economic benefit.

Cleo Nardo, 4 Apr 2023 19:13 UTC
27 points
22 comments · 4 min read · LW link

Most People Don’t Realize We Have No Idea How Our AIs Work

Thane Ruthenis, 21 Dec 2023 20:02 UTC
158 points
42 comments · 1 min read · LW link

My Current Thoughts on the AI Strategic Landscape

Jeffrey Heninger, 28 Sep 2023 17:59 UTC
11 points
28 comments · 14 min read · LW link

AI pause/governance advocacy might be net-negative, especially without a focus on explaining x-risk

Mikhail Samin, 27 Aug 2023 23:05 UTC
82 points
9 comments · 6 min read · LW link

The International PauseAI Protest: Activism under uncertainty

Joseph Miller, 12 Oct 2023 17:36 UTC
32 points
1 comment · 1 min read · LW link

I Would Have Solved Alignment, But I Was Worried That Would Advance Timelines

307th, 20 Oct 2023 16:37 UTC
118 points
33 comments · 9 min read · LW link

List #1: Why stopping the development of AGI is hard but doable

Remmelt, 24 Dec 2022 9:52 UTC
6 points
11 comments · 5 min read · LW link

List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well… coordinating as humans with AGI coordinating to be aligned with humans

Remmelt, 24 Dec 2022 9:53 UTC
1 point
0 comments · 3 min read · LW link

Accurate Models of AI Risk Are Hyperexistential Exfohazards

Thane Ruthenis, 25 Dec 2022 16:50 UTC
31 points
38 comments · 9 min read · LW link

How ‘Human-Human’ dynamics give way to ‘Human-AI’ and then ‘AI-AI’ dynamics

27 Dec 2022 3:16 UTC
−2 points
5 comments · 2 min read · LW link
(mflb.com)

Institutions Cannot Restrain Dark-Triad AI Exploitation

27 Dec 2022 10:34 UTC
5 points
0 comments · 5 min read · LW link
(mflb.com)

How harmful are improvements in AI? + Poll

15 Feb 2022 18:16 UTC
15 points
4 comments · 8 min read · LW link

Nine Points of Collective Insanity

27 Dec 2022 3:14 UTC
−2 points
3 comments · 1 min read · LW link
(mflb.com)

[Question] If AI is in a bubble and the bubble bursts, what would you do?

Remmelt, 19 Aug 2024 10:56 UTC
12 points
13 comments · 1 min read · LW link

Some reasons to start a project to stop harmful AI

Remmelt, 22 Aug 2024 16:23 UTC
5 points
0 comments · 2 min read · LW link

Fifteen Lawsuits against OpenAI

Remmelt, 9 Mar 2024 12:22 UTC
27 points
4 comments · 1 min read · LW link

An EA used deceptive messaging to advance their project; we need mechanisms to avoid deontologically dubious plans

Mikhail Samin, 13 Feb 2024 23:15 UTC
18 points
1 comment · 1 min read · LW link

Anthropic is being sued for copying books to train Claude

Remmelt, 31 Aug 2024 2:57 UTC
20 points
4 comments · 2 min read · LW link
(fingfx.thomsonreuters.com)

An AI crash is our best bet for restricting AI

Remmelt, 11 Oct 2024 2:12 UTC
27 points
3 comments · 1 min read · LW link

Why Stop AI is barricading OpenAI

Remmelt, 14 Oct 2024 7:12 UTC
−16 points
32 comments · 1 min read · LW link
(docs.google.com)

OpenAI defected, but we can take honest actions

Remmelt, 21 Oct 2024 8:41 UTC
17 points
15 comments · 1 min read · LW link

Ex-OpenAI researcher says OpenAI mass-violated copyright law

Remmelt, 24 Oct 2024 1:00 UTC
−2 points
0 comments · 1 min read · LW link
(suchir.net)

The 0.2 OOMs/year target

Cleo Nardo, 30 Mar 2023 18:15 UTC
84 points
24 comments · 5 min read · LW link

[Question] What’s the deal with Effective Accelerationism (e/acc)?

RomanHauksson, 6 Apr 2023 4:03 UTC
23 points
9 comments · 2 min read · LW link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

Eliezer Yudkowsky, 8 Apr 2023 0:36 UTC
253 points
40 comments · 12 min read · LW link

[Crosspost] Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure

otto.barten, 8 May 2023 14:09 UTC
7 points
0 comments · 6 min read · LW link
(forum.effectivealtruism.org)

Horizontal and Vertical Integration

Jeffrey Heninger, 1 Jul 2023 1:15 UTC
17 points
1 comment · 2 min read · LW link