
Public Reactions to AI

Last edit: 22 Mar 2023 5:00 UTC by Ruby

For posts that describe or link to notable reactions to AI developments, e.g. Bill Gates sharing his current beliefs about AI.

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?

Karl von Wendt · 25 Jun 2023 16:59 UTC
106 points
53 comments · 7 min read · LW link

Helping your Senator Prepare for the Upcoming Sam Altman Hearing

Tiago de Vassal · 14 May 2023 22:45 UTC
69 points
2 comments · 1 min read · LW link
(aisafetytour.com)

Some quotes from Tuesday’s Senate hearing on AI

Daniel_Eth · 17 May 2023 12:13 UTC
66 points
9 comments · 1 min read · LW link

Eliezer Yudkowsky’s Letter in Time Magazine

Zvi · 5 Apr 2023 18:00 UTC
212 points
86 comments · 14 min read · LW link
(thezvi.wordpress.com)

Talking publicly about AI risk

Jan_Kulveit · 21 Apr 2023 11:28 UTC
180 points
9 comments · 6 min read · LW link

Chatbot convinces Belgian to commit suicide

Jeroen De Ryck · 28 Mar 2023 18:14 UTC
60 points
18 comments · 3 min read · LW link
(www.standaard.be)

[Question] Is this true? paulg: [One special thing about AI risk is that people who understand AI well are more worried than people who understand it poorly]

tailcalled · 1 Apr 2023 11:59 UTC
25 points
5 comments · 1 min read · LW link

New survey: 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development

Akash · 5 Apr 2023 1:26 UTC
46 points
9 comments · 1 min read · LW link
(today.yougov.com)

46% of US adults at least “somewhat concerned” about AI extinction risk.

Foyle · 5 Apr 2023 5:25 UTC
1 point
0 comments · 1 min read · LW link

Eliezer on The Lunar Society podcast

Max H · 6 Apr 2023 16:18 UTC
40 points
5 comments · 1 min read · LW link
(www.dwarkeshpatel.com)

Is it true that only a chatbot encouraged a man to commit suicide?

Jeroen De Ryck · 6 Apr 2023 14:10 UTC
6 points
0 comments · 4 min read · LW link
(www.vrt.be)

Yoshua Bengio: “Slowing down development of AI systems passing the Turing test”

Roman Leventov · 6 Apr 2023 3:31 UTC
49 points
2 comments · 5 min read · LW link
(yoshuabengio.org)

Ng and LeCun on the 6-Month Pause (Transcript)

Stephen Fowler · 9 Apr 2023 6:14 UTC
29 points
7 comments · 16 min read · LW link

AI Safety Newsletter #1 [CAIS Linkpost]

10 Apr 2023 20:18 UTC
45 points
0 comments · 4 min read · LW link
(newsletter.safe.ai)

Financial Times: We must slow down the race to God-like AI

trevor · 13 Apr 2023 19:55 UTC
112 points
17 comments · 16 min read · LW link
(www.ft.com)

NYT: A Conversation With Bing’s Chatbot Left Me Deeply Unsettled

trevor · 16 Feb 2023 22:57 UTC
53 points
5 comments · 7 min read · LW link
(www.nytimes.com)

Many important technologies start out as science fiction before becoming real

trevor · 10 Feb 2023 9:36 UTC
28 points
2 comments · 2 min read · LW link

NYT: The Surprising Thing A.I. Engineers Will Tell You if You Let Them

Sodium · 17 Apr 2023 18:59 UTC
11 points
2 comments · 1 min read · LW link
(www.nytimes.com)

Max Tegmark’s new Time article on how we’re in a Don’t Look Up scenario [Linkpost]

Jonas Hallgren · 25 Apr 2023 15:41 UTC
39 points
9 comments · 1 min read · LW link
(time.com)

My thoughts on the social response to AI risk

Matthew Barnett · 1 Nov 2023 21:17 UTC
156 points
37 comments · 10 min read · LW link

A brief collection of Hinton’s recent comments on AGI risk

Kaj_Sotala · 4 May 2023 23:31 UTC
143 points
9 comments · 11 min read · LW link

Linkpost for Accursed Farms Discussion / debate with AI expert Eliezer Yudkowsky

gilch · 5 May 2023 18:20 UTC
14 points
2 comments · 1 min read · LW link
(www.youtube.com)

[Crosspost] Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure

otto.barten · 8 May 2023 14:09 UTC
7 points
0 comments · 6 min read · LW link
(forum.effectivealtruism.org)

Amazon KDP AI content guidelines

ChristianKl · 11 Sep 2023 18:36 UTC
12 points
0 comments · 1 min read · LW link

Transcript: NBC Nightly News: AI ‘race to recklessness’ w/ Tristan Harris, Aza Raskin

WilliamKiely · 23 Mar 2023 1:04 UTC
63 points
4 comments · 3 min read · LW link

The Overton Window widens: Examples of AI risk in the media

Akash · 23 Mar 2023 17:10 UTC
107 points
24 comments · 6 min read · LW link

Datapoint: median 10% AI x-risk mentioned on Dutch public TV channel

Chris van Merwijk · 26 Mar 2023 12:50 UTC
17 points
1 comment · 1 min read · LW link

Geoffrey Hinton - Full “not inconceivable” quote

WilliamKiely · 28 Mar 2023 0:22 UTC
21 points
2 comments · 2 min read · LW link

Elon talked with senior Chinese leadership about AI X-risk

ChristianKl · 7 Jun 2023 15:02 UTC
47 points
2 comments · 1 min read · LW link
(www.youtube.com)

EY in the New York Times

Blueberry · 10 Jun 2023 12:21 UTC
6 points
14 comments · 1 min read · LW link
(www.nytimes.com)

UK PM: $125M for AI safety

Hauke Hillebrandt · 12 Jun 2023 12:33 UTC
31 points
11 comments · 1 min read · LW link
(twitter.com)

UK Foundation Model Task Force - Expression of Interest

ojorgensen · 18 Jun 2023 9:43 UTC
64 points
2 comments · 1 min read · LW link
(twitter.com)

News: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI

Jonathan Claybrough · 21 Jul 2023 18:00 UTC
65 points
10 comments · 2 min read · LW link
(www.whitehouse.gov)

The “public debate” about AI is confusing for the general public and for policymakers because it is a three-sided debate

Adam David Long · 1 Aug 2023 0:08 UTC
146 points
30 comments · 4 min read · LW link

Steven Wolfram on AI Alignment

Bill Benzon · 20 Aug 2023 19:49 UTC
66 points
15 comments · 4 min read · LW link

[Linkpost] GatesNotes: The Age of AI has begun

WilliamKiely · 22 Mar 2023 4:20 UTC
19 points
9 comments · 1 min read · LW link

What Yann LeCun gets wrong about aligning AI (video)

blake8086 · 18 May 2023 0:02 UTC
0 points
0 comments · 1 min read · LW link
(www.youtube.com)

Rishi Sunak mentions “existential threats” in talk with OpenAI, DeepMind, Anthropic CEOs

24 May 2023 21:06 UTC
34 points
1 comment · 1 min read · LW link
(www.gov.uk)

Andrew Ng wants to have a conversation about extinction risk from AI

Leon Lang · 5 Jun 2023 22:29 UTC
32 points
2 comments · 1 min read · LW link
(twitter.com)

PCAST Working Group on Generative AI Invites Public Input

Christopher King · 13 May 2023 22:49 UTC
7 points
0 comments · 1 min read · LW link
(terrytao.wordpress.com)

AI Alignment in The New Yorker

Eleni Angelou · 17 May 2023 21:36 UTC
8 points
0 comments · 1 min read · LW link
(www.newyorker.com)

The unspoken but ridiculous assumption of AI doom: the hidden doom assumption

Christopher King · 1 Jun 2023 17:01 UTC
−9 points
1 comment · 3 min read · LW link

In favor of accelerating problems you’re trying to solve

Christopher King · 11 Apr 2023 18:15 UTC
2 points
2 comments · 4 min read · LW link

[Question] What projects and efforts are there to promote AI safety research?

Christopher King · 24 May 2023 0:33 UTC
4 points
0 comments · 1 min read · LW link

[Question] Is there a manifold plot of all people who had a say in AI alignment?

skulk-and-quarrel · 31 Mar 2023 21:50 UTC
8 points
0 comments · 1 min read · LW link

Current AI harms are also sci-fi

Christopher King · 8 Jun 2023 17:49 UTC
26 points
3 comments · 1 min read · LW link

Yes, avoiding extinction from AI *is* an urgent priority: a response to Seth Lazar, Jeremy Howard, and Arvind Narayanan.

Soroush Pour · 1 Jun 2023 13:38 UTC
17 points
0 comments · 5 min read · LW link
(www.soroushjp.com)