Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds

JakubK · 2 May 2023 22:50 UTC
10 points
0 comments · 1 min read · LW link

A Case for the Least Forgiving Take On Alignment

Thane Ruthenis · 2 May 2023 21:34 UTC
100 points
84 comments · 22 min read · LW link

Are Emergent Abilities of Large Language Models a Mirage? [linkpost]

Matthew Barnett · 2 May 2023 21:01 UTC
53 points
19 comments · 1 min read · LW link
(arxiv.org)

Does descaling a kettle help? Theory and practice

philh · 2 May 2023 20:20 UTC
35 points
25 comments · 8 min read · LW link
(reasonableapproximation.net)

Avoiding xrisk from AI doesn’t mean focusing on AI xrisk

Stuart_Armstrong · 2 May 2023 19:27 UTC
64 points
7 comments · 3 min read · LW link

AI Safety Newsletter #4: AI and Cybersecurity, Persuasive AIs, Weaponization, and Geoffrey Hinton talks AI risks

2 May 2023 18:41 UTC
32 points
0 comments · 5 min read · LW link
(newsletter.safe.ai)

My best system yet: text-based project management

jt · 2 May 2023 17:44 UTC
6 points
8 comments · 5 min read · LW link

[Question] What’s the state of AI safety in Japan?

ChristianKl · 2 May 2023 17:06 UTC
5 points
1 comment · 1 min read · LW link

Five Worlds of AI (by Scott Aaronson and Boaz Barak)

mishka · 2 May 2023 13:23 UTC
22 points
6 comments · 1 min read · LW link · 1 review
(scottaaronson.blog)

Systems that cannot be unsafe cannot be safe

Davidmanheim · 2 May 2023 8:53 UTC
62 points
27 comments · 2 min read · LW link

AGI safety career advice

Richard_Ngo · 2 May 2023 7:36 UTC
132 points
24 comments · 13 min read · LW link

An Impossibility Proof Relevant to the Shutdown Problem and Corrigibility

Audere · 2 May 2023 6:52 UTC
65 points
13 comments · 9 min read · LW link

Some Thoughts on Virtue Ethics for AIs

peligrietzer · 2 May 2023 5:46 UTC
76 points
8 comments · 4 min read · LW link

Technological unemployment as another test for rationalist winning

RomanHauksson · 2 May 2023 4:16 UTC
14 points
5 comments · 1 min read · LW link

The Moral Copernican Principle

Legionnaire · 2 May 2023 3:25 UTC
5 points
7 comments · 2 min read · LW link

Open & Welcome Thread—May 2023

Ruby · 2 May 2023 2:58 UTC
21 points
41 comments · 1 min read · LW link

Summaries of top forum posts (24th − 30th April 2023)

Zoe Williams · 2 May 2023 2:30 UTC
12 points
1 comment · 1 min read · LW link

AXRP Episode 21 - Interpretability for Engineers with Stephen Casper

DanielFilan · 2 May 2023 0:50 UTC
12 points
1 comment · 66 min read · LW link

Getting Your Eyes On

LoganStrohl · 2 May 2023 0:33 UTC
58 points
11 comments · 14 min read · LW link

What 2025 looks like

Ruby · 1 May 2023 22:53 UTC
75 points
17 comments · 15 min read · LW link

[Question] Natural Selection vs Gradient Descent

CuriousApe11 · 1 May 2023 22:16 UTC
4 points
3 comments · 1 min read · LW link

A[I] Zombie Apocalypse Is Already Upon Us

NickHarris · 1 May 2023 22:02 UTC
−6 points
4 comments · 2 min read · LW link

Geoff Hinton Quits Google

Adam Shai · 1 May 2023 21:03 UTC
98 points
14 comments · 1 min read · LW link

The Apprentice Thread 2

hath · 1 May 2023 20:09 UTC
50 points
19 comments · 1 min read · LW link

Budapest, Hungary – ACX Meetups Everywhere Spring 2023

1 May 2023 17:36 UTC
4 points
0 comments · 1 min read · LW link

In favor of steelmanning

jp · 1 May 2023 17:12 UTC
36 points
6 comments · 1 min read · LW link

Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes

1 May 2023 16:47 UTC
96 points
10 comments · 30 min read · LW link

Distinguishing misuse is difficult and uncomfortable

lemonhope · 1 May 2023 16:23 UTC
17 points
3 comments · 1 min read · LW link

[Question] Does agency necessarily imply self-preservation instinct?

Mislav Jurić · 1 May 2023 16:06 UTC
5 points
8 comments · 1 min read · LW link

What Boston Can Teach Us About What a Woman Is

ymeskhout · 1 May 2023 15:34 UTC
18 points
45 comments · 12 min read · LW link

The Rocket Alignment Problem, Part 2

Zvi · 1 May 2023 14:30 UTC
40 points
20 comments · 9 min read · LW link
(thezvi.wordpress.com)

Socialist Democratic-Republic GAME: 12 Amendments to the Constitutions of the Free World

monkymind · 1 May 2023 13:13 UTC
−34 points
0 comments · 1 min read · LW link

[Question] Where is all this evidence of UFOs?

Logan Zoellner · 1 May 2023 12:13 UTC
29 points
42 comments · 1 min read · LW link

LessWrong Community Weekend 2023 [Applications now closed]

Henry Prowbell · 1 May 2023 9:31 UTC
43 points
0 comments · 6 min read · LW link

LessWrong Community Weekend 2023 [Applications now closed]

Henry Prowbell · 1 May 2023 9:08 UTC
89 points
0 comments · 6 min read · LW link

[Question] In AI Risk what is the base model of the AI?

jmh · 1 May 2023 3:25 UTC
3 points
1 comment · 1 min read · LW link

Hell is Game Theory Folk Theorems

jessicata · 1 May 2023 3:16 UTC
81 points
102 comments · 5 min read · LW link · 1 review
(unstableontology.com)

Safety standards: a framework for AI regulation

joshc · 1 May 2023 0:56 UTC
19 points
0 comments · 8 min read · LW link

neuron spike computational capacity

bhauth · 1 May 2023 0:28 UTC
16 points
0 comments · 2 min read · LW link

Cult of Error

bayesyatina · 30 Apr 2023 23:33 UTC
5 points
2 comments · 3 min read · LW link

How can one rationally have very high or very low probabilities of extinction in a pre-paradigmatic field?

Shmi · 30 Apr 2023 21:53 UTC
39 points
15 comments · 1 min read · LW link

A small update to the Sparse Coding interim research report

30 Apr 2023 19:54 UTC
61 points
5 comments · 1 min read · LW link

Discussion about AI Safety funding (FB transcript)

Akash · 30 Apr 2023 19:05 UTC
75 points
8 comments · 1 min read · LW link

Support me in a Week-Long Picketing Campaign Near OpenAI’s HQ: Seeking Support and Ideas from the LessWrong Community

Percy · 30 Apr 2023 17:48 UTC
−21 points
15 comments · 1 min read · LW link

money ≠ value

stonefly · 30 Apr 2023 17:47 UTC
2 points
3 comments · 3 min read · LW link

Vaccine Policies Need Updating

jefftk · 30 Apr 2023 17:20 UTC
11 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Fundamental Uncertainty: Chapter 7 - Why is truth useful?

Gordon Seidoh Worley · 30 Apr 2023 16:48 UTC
10 points
3 comments · 10 min read · LW link

Simulators Increase the Likelihood of Alignment by Default

Wuschel Schulz · 30 Apr 2023 16:32 UTC
13 points
1 comment · 5 min read · LW link

Connectomics seems great from an AI x-risk perspective

Steven Byrnes · 30 Apr 2023 14:38 UTC
98 points
7 comments · 10 min read · LW link · 1 review

The voyage of novelty

TsviBT · 30 Apr 2023 12:52 UTC
11 points
0 comments · 6 min read · LW link