GPTs are Predictors, not Imitators

Eliezer Yudkowsky · 8 Apr 2023 19:59 UTC
403 points
91 comments · 3 min read · LW link

LW Team is adjusting moderation policy

Raemon · 4 Apr 2023 20:41 UTC
304 points
185 comments · 3 min read · LW link

Hooray for stepping out of the limelight

So8res · 1 Apr 2023 2:45 UTC
282 points
24 comments · 1 min read · LW link

Notes on Teaching in Prison

jsd · 19 Apr 2023 1:53 UTC
274 points
13 comments · 12 min read · LW link

[SEE NEW EDITS] No, *You* Need to Write Clearer

Nicholas / Heather Kross · 29 Apr 2023 5:04 UTC
261 points
65 comments · 5 min read · LW link
(www.thinkingmuchbetter.com)

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

Eliezer Yudkowsky · 8 Apr 2023 0:36 UTC
253 points
40 comments · 12 min read · LW link

My Assessment of the Chinese AI Safety Community

Lao Mein · 25 Apr 2023 4:21 UTC
248 points
94 comments · 3 min read · LW link

On AutoGPT

Zvi · 13 Apr 2023 12:30 UTC
248 points
47 comments · 20 min read · LW link
(thezvi.wordpress.com)

My views on “doom”

paulfchristiano · 27 Apr 2023 17:50 UTC
245 points
35 comments · 2 min read · LW link
(ai-alignment.com)

Policy discussions follow strong contextualizing norms

Richard_Ngo · 1 Apr 2023 23:51 UTC
230 points
61 comments · 3 min read · LW link

Catching the Eye of Sauron

Casey B. · 7 Apr 2023 0:40 UTC
221 points
68 comments · 4 min read · LW link

Orthogonal: A new agent foundations alignment organization

Tamsin Leake · 19 Apr 2023 20:17 UTC
216 points
4 comments · 1 min read · LW link
(orxl.org)

Eliezer Yudkowsky’s Letter in Time Magazine

Zvi · 5 Apr 2023 18:00 UTC
212 points
86 comments · 14 min read · LW link
(thezvi.wordpress.com)

Evolution provides no evidence for the sharp left turn

Quintin Pope · 11 Apr 2023 18:43 UTC
205 points
62 comments · 15 min read · LW link

If interpretability research goes well, it may get dangerous

So8res · 3 Apr 2023 21:48 UTC
200 points
11 comments · 2 min read · LW link

Giant (In)scrutable Matrices: (Maybe) the Best of All Possible Worlds

1a3orn · 4 Apr 2023 17:39 UTC
196 points
37 comments · 5 min read · LW link

The ‘ petertodd’ phenomenon

mwatkins · 15 Apr 2023 0:59 UTC
192 points
49 comments · 38 min read · LW link

Transcript and Brief Response to Twitter Conversation between Yann LeCunn and Eliezer Yudkowsky

Zvi · 26 Apr 2023 13:10 UTC
190 points
51 comments · 10 min read · LW link
(thezvi.wordpress.com)

The basic reasons I expect AGI ruin

Rob Bensinger · 18 Apr 2023 3:37 UTC
188 points
73 comments · 14 min read · LW link

Talking publicly about AI risk

Jan_Kulveit · 21 Apr 2023 11:28 UTC
180 points
9 comments · 6 min read · LW link

Killing Socrates

Duncan Sabien (Deactivated) · 11 Apr 2023 10:28 UTC
178 points
144 comments · 8 min read · LW link

A report about LessWrong karma volatility from a different universe

Ben Pace · 1 Apr 2023 21:48 UTC
176 points
7 comments · 1 min read · LW link

[April Fools’] Definitive confirmation of shard theory

TurnTrout · 1 Apr 2023 7:27 UTC
168 points
8 comments · 2 min read · LW link

The Brain is Not Close to Thermodynamic Limits on Computation

DaemonicSigil · 24 Apr 2023 8:21 UTC
167 points
58 comments · 5 min read · LW link

Davidad’s Bold Plan for Alignment: An In-Depth Explanation

19 Apr 2023 16:09 UTC
159 points
34 comments · 21 min read · LW link

grey goo is unlikely

bhauth · 17 Apr 2023 1:59 UTC
159 points
117 comments · 9 min read · LW link
(bhauth.com)

Agentized LLMs will change the alignment landscape

Seth Herd · 9 Apr 2023 2:29 UTC
157 points
97 comments · 3 min read · LW link

AI doom from an LLM-plateau-ist perspective

Steven Byrnes · 27 Apr 2023 13:58 UTC
157 points
24 comments · 6 min read · LW link

A freshman year during the AI midgame: my approach to the next year

Buck · 14 Apr 2023 0:38 UTC
152 points
14 comments · 1 min read · LW link

AI x-risk, approximately ordered by embarrassment

Alex Lawsen · 12 Apr 2023 23:01 UTC
151 points
7 comments · 19 min read · LW link

Shutting down AI is not enough. We need to destroy all technology.

Matthew Barnett · 1 Apr 2023 21:03 UTC
148 points
36 comments · 1 min read · LW link

Could a superintelligence deduce general relativity from a falling apple? An investigation

titotal · 23 Apr 2023 12:49 UTC
147 points
39 comments · 9 min read · LW link

The self-unalignment problem

14 Apr 2023 12:10 UTC
146 points
24 comments · 10 min read · LW link

Consider The Hand Axe

ymeskhout · 8 Apr 2023 1:31 UTC
142 points
16 comments · 6 min read · LW link

Request to AGI organizations: Share your views on pausing AI progress

11 Apr 2023 17:30 UTC
141 points
11 comments · 1 min read · LW link

Four mindset disagreements behind existential risk disagreements in ML

Rob Bensinger · 11 Apr 2023 4:53 UTC
136 points
12 comments · 1 min read · LW link

The Learning-Theoretic Agenda: Status 2023

Vanessa Kosoy · 19 Apr 2023 5:21 UTC
135 points
13 comments · 55 min read · LW link

Tuning your Cognitive Strategies

Raemon · 27 Apr 2023 20:32 UTC
132 points
57 comments · 9 min read · LW link
(bewelltuned.com)

AI Summer Harvest

Cleo Nardo · 4 Apr 2023 3:35 UTC
130 points
10 comments · 1 min read · LW link

But why would the AI kill us?

So8res · 17 Apr 2023 18:42 UTC
129 points
95 comments · 2 min read · LW link

Misgeneralization as a misnomer

So8res · 6 Apr 2023 20:43 UTC
129 points
22 comments · 4 min read · LW link

$250 prize for checking Jake Cannell’s Brain Efficiency

Alexander Gietelink Oldenziel · 26 Apr 2023 16:21 UTC
123 points
170 comments · 2 min read · LW link

[New LW Feature] “Debates”

1 Apr 2023 7:00 UTC
121 points
35 comments · 1 min read · LW link

Goodhart’s Law inside the human mind

Kaj_Sotala · 17 Apr 2023 13:48 UTC
117 points
13 comments · 16 min read · LW link

Deep learning models might be secretly (almost) linear

beren · 24 Apr 2023 18:43 UTC
117 points
29 comments · 4 min read · LW link

Financial Times: We must slow down the race to God-like AI

trevor · 13 Apr 2023 19:55 UTC
112 points
17 comments · 16 min read · LW link
(www.ft.com)

How could you possibly choose what an AI wants?

So8res · 19 Apr 2023 17:08 UTC
105 points
19 comments · 1 min read · LW link

Should we publish mechanistic interpretability research?

21 Apr 2023 16:19 UTC
105 points
40 comments · 13 min read · LW link

Shapley Value Attribution in Chain of Thought

leogao · 14 Apr 2023 5:56 UTC
103 points
7 comments · 4 min read · LW link

10 reasons why lists of 10 reasons might be a winning strategy

trevor · 6 Apr 2023 21:24 UTC
101 points
7 comments · 1 min read · LW link