“Rudeness”, a useful coordination mechanic

Raemon · Nov 11, 2022, 10:27 PM
49 points
20 comments · 2 min read · LW link

Internalizing the damage of bad-acting partners creates incentives for due diligence

tailcalled · Nov 11, 2022, 8:57 PM
17 points
7 comments · 1 min read · LW link

Speculation on Current Opportunities for Unusually High Impact in Global Health

johnswentworth · Nov 11, 2022, 8:47 PM
114 points
31 comments · 4 min read · LW link

[Question] Is acausal extortion possible?

sisyphus · Nov 11, 2022, 7:48 PM
−20 points
35 comments · 3 min read · LW link

Catharsis in Bb

jefftk · Nov 11, 2022, 5:40 PM
6 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Instrumental convergence is what makes general intelligence possible

tailcalled · Nov 11, 2022, 4:38 PM
105 points
11 comments · 4 min read · LW link

Weekly Roundup #5

Zvi · Nov 11, 2022, 4:20 PM
33 points
0 comments · 6 min read · LW link
(thezvi.wordpress.com)

Charging for the Dharma

jchan · Nov 11, 2022, 2:02 PM
32 points
18 comments · 5 min read · LW link

[Question] EA (& AI Safety) has overestimated its projected funding — which decisions must be revised?

Cleo Nardo · Nov 11, 2022, 1:50 PM
22 points
7 comments · 1 min read · LW link
(forum.effectivealtruism.org)

Where the logical fallacy is not (Generalization From Fictional Evidence)

banev · Nov 11, 2022, 10:41 AM
−12 points
14 comments · 1 min read · LW link

Why I’m Working On Model Agnostic Interpretability

Jessica Rumbelow · Nov 11, 2022, 9:24 AM
27 points
9 comments · 2 min read · LW link

How likely are malign priors over objectives? [aborted WIP]

David Johnston · Nov 11, 2022, 5:36 AM
−1 points
0 comments · 8 min read · LW link

Do Timeless Decision Theorists reject all blackmail from other Timeless Decision Theorists?

myren · Nov 11, 2022, 12:38 AM
7 points
8 comments · 3 min read · LW link

We must be very clear: fraud in the service of effective altruism is unacceptable

evhub · Nov 10, 2022, 11:31 PM
42 points
56 comments · LW link

[simulation] 4chan user claiming to be the attorney hired by Google’s sentient chatbot LaMDA shares wild details of encounter

janus · Nov 10, 2022, 9:39 PM
19 points
1 comment · 13 min read · LW link
(generative.ink)

divine carrot

Alok Singh · Nov 10, 2022, 8:50 PM
18 points
2 comments · 1 min read · LW link
(alok.github.io)

Metaculus Announces The Million Predictions Hackathon

ChristianWilliams · Nov 10, 2022, 8:00 PM
7 points
0 comments · LW link

The harnessing of complexity

geduardo · Nov 10, 2022, 6:44 PM
6 points
2 comments · 3 min read · LW link

[Question] Is there a demo of “You can’t fetch the coffee if you’re dead”?

Ram Rachum · Nov 10, 2022, 6:41 PM
8 points
9 comments · 1 min read · LW link

Mastodon Linking Norms

jefftk · Nov 10, 2022, 3:10 PM
9 points
9 comments · 2 min read · LW link
(www.jefftk.com)

Covid 11/10/22: Into the Background

Zvi · Nov 10, 2022, 1:40 PM
31 points
5 comments · 4 min read · LW link
(thezvi.wordpress.com)

LessWrong Poll on AGI

Niclas Kupper · Nov 10, 2022, 1:13 PM
12 points
6 comments · 1 min read · LW link

The optimal angle for a solar boiler is different than for a solar panel

Yair Halberstadt · Nov 10, 2022, 10:32 AM
42 points
4 comments · 2 min read · LW link

What it’s like to dissect a cadaver

Alok Singh · Nov 10, 2022, 6:40 AM
208 points
24 comments · 5 min read · LW link
(alok.github.io)

I Converted Book I of The Sequences Into A Zoomer-Readable Format

dkirmani · Nov 10, 2022, 2:59 AM
200 points
32 comments · 2 min read · LW link

Adversarial Priors: Not Paying People to Lie to You

eva_ · Nov 10, 2022, 2:29 AM
22 points
9 comments · 3 min read · LW link

Is full self-driving an AGI-complete problem?

kraemahz · Nov 10, 2022, 2:04 AM
10 points
5 comments · 1 min read · LW link

[Question] What are examples of problems that were caused by intelligence, that couldn’t be solved with intelligence?

Peter O'Malley · Nov 10, 2022, 2:04 AM
1 point
2 comments · 1 min read · LW link

Desiderata for an Adversarial Prior

Shmi · Nov 9, 2022, 11:45 PM
13 points
2 comments · 1 min read · LW link

Chord Notation

jefftk · Nov 9, 2022, 9:30 PM
12 points
5 comments · 1 min read · LW link
(www.jefftk.com)

[ASoT] Instrumental convergence is useful

Ulisse Mini · Nov 9, 2022, 8:20 PM
5 points
9 comments · 1 min read · LW link

Mesatranslation and Metatranslation

jdp · Nov 9, 2022, 6:46 PM
25 points
4 comments · 11 min read · LW link

Trying to Make a Treacherous Mesa-Optimizer

MadHatter · Nov 9, 2022, 6:07 PM
95 points
14 comments · 4 min read · LW link
(attentionspan.blog)

A caveat to the Orthogonality Thesis

Wuschel Schulz · Nov 9, 2022, 3:06 PM
38 points
10 comments · 2 min read · LW link

Wednesday South Bay Meetups, November 16

Leonard Zabarsky · Nov 9, 2022, 2:21 AM
1 point
0 comments · 1 min read · LW link

FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next

Nathan Young · Nov 9, 2022, 2:14 AM
60 points
21 comments · LW link

A first success story for Outer Alignment: InstructGPT

Noosphere89 · Nov 8, 2022, 10:52 PM
6 points
1 comment · 1 min read · LW link
(openai.com)

Trying Mastodon

jefftk · Nov 8, 2022, 7:10 PM
12 points
4 comments · 1 min read · LW link
(www.jefftk.com)

Inverse scaling can become U-shaped

Edouard Harris · Nov 8, 2022, 7:04 PM
27 points
15 comments · 1 min read · LW link
(arxiv.org)

People care about each other even though they have imperfect motivational pointers?

TurnTrout · Nov 8, 2022, 6:15 PM
33 points
25 comments · 7 min read · LW link

Applying superintelligence without collusion

Eric Drexler · Nov 8, 2022, 6:08 PM
109 points
63 comments · 4 min read · LW link

[Question] Binance is buying FTX.com: How did it happen and what are the implications?

Caerulean · Nov 8, 2022, 5:14 PM
16 points
6 comments · 1 min read · LW link

Some advice on independent research

Marius Hobbhahn · Nov 8, 2022, 2:46 PM
56 points
5 comments · 10 min read · LW link

Mysteries of mode collapse

janus · Nov 8, 2022, 10:37 AM
284 points
57 comments · 14 min read · LW link · 1 review

[ASoT] Thoughts on GPT-N

Ulisse Mini · Nov 8, 2022, 7:14 AM
8 points
0 comments · 1 min read · LW link

Michael Simm—Introducing Myself

Michael Simm · Nov 8, 2022, 5:45 AM
4 points
0 comments · 2 min read · LW link

EA & LW Forums Weekly Summary (31st Oct − 6th Nov 22′)

Zoe Williams · Nov 8, 2022, 3:58 AM
12 points
1 comment · LW link

[Question] Value of Querying 100+ People About Humanity’s Future

T431 · Nov 8, 2022, 12:41 AM
9 points
3 comments · 2 min read · LW link

How could we know that an AGI system will have good consequences?

So8res · Nov 7, 2022, 10:42 PM
111 points
25 comments · 5 min read · LW link

A Walkthrough of Interpretability in the Wild (w/ authors Kevin Wang, Arthur Conmy & Alexandre Variengien)

Neel Nanda · Nov 7, 2022, 10:39 PM
30 points
15 comments · 3 min read · LW link
(youtu.be)