[Question] Where’s the economic incentive for wokism coming from?

Valentine · Dec 8, 2022, 11:28 PM
12 points
105 comments · 1 min read · LW link

I Believe we are in a Hardware Overhang

nem · Dec 8, 2022, 11:18 PM
8 points
0 comments · 1 min read · LW link

Of pumpkins, the Falcon Heavy, and Groucho Marx: High-Level discourse structure in ChatGPT

Bill Benzon · Dec 8, 2022, 10:25 PM
2 points
0 comments · 8 min read · LW link

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel · Dec 8, 2022, 9:57 PM
4 points
5 comments · 14 min read · LW link

AI Safety Seems Hard to Measure

HoldenKarnofsky · Dec 8, 2022, 7:50 PM
71 points
6 comments · 14 min read · LW link
(www.cold-takes.com)

Playing shell games with definitions

weverka · Dec 8, 2022, 7:35 PM
9 points
3 comments · 1 min read · LW link

Notes on OpenAI’s alignment plan

Alex Flint · Dec 8, 2022, 7:13 PM
40 points
5 comments · 7 min read · LW link

Relevant to natural abstractions: Euclidean Symmetry Equivariant Machine Learning—Overview, Applications, and Open Questions

the gears to ascension · Dec 8, 2022, 6:01 PM
8 points
0 comments · 1 min read · LW link
(youtu.be)

I’ve started publishing the novel I wrote to promote EA

Timothy Underwood · Dec 8, 2022, 5:30 PM
10 points
3 comments · 1 min read · LW link

Neural networks biased towards geometrically simple functions?

DavidHolmes · Dec 8, 2022, 4:16 PM
16 points
2 comments · 3 min read · LW link

If Wentworth is right about natural abstractions, it would be bad for alignment

Wuschel Schulz · Dec 8, 2022, 3:19 PM
29 points
5 comments · 4 min read · LW link

Covid 12/8/22: Another Winter Wave

Zvi · Dec 8, 2022, 2:40 PM
23 points
8 comments · 11 min read · LW link
(thezvi.wordpress.com)

Why I’m Sceptical of Foom

DragonGod · Dec 8, 2022, 10:01 AM
20 points
36 comments · 3 min read · LW link

Take 7: You should talk about “the human’s utility function” less.

Charlie Steiner · Dec 8, 2022, 8:14 AM
50 points
22 comments · 2 min read · LW link

Machine Learning Consent

jefftk · Dec 8, 2022, 3:50 AM
38 points
14 comments · 3 min read · LW link
(www.jefftk.com)

Riffing on the agent type

Quinn · Dec 8, 2022, 12:19 AM
21 points
3 comments · 4 min read · LW link

[Question] Looking for ideas of public assets (stocks, funds, ETFs) that I can invest in to have a chance at profiting from the mass adoption and commercialization of AI technology

Annapurna · Dec 7, 2022, 10:35 PM
15 points
9 comments · 1 min read · LW link

A Fallibilist Wordview

Toni MUENDEL · Dec 7, 2022, 8:59 PM
−13 points
2 comments · 13 min read · LW link

Thoughts on AGI organizations and capabilities work

Dec 7, 2022, 7:46 PM
102 points
17 comments · 5 min read · LW link

How to Think About Cli­mate Models and How to Im­prove Them

clans · Dec 7, 2022, 7:37 PM
7 points
0 comments · 2 min read · LW link
(locationtbd.home.blog)

The novelty quotient

River Lewis · Dec 7, 2022, 5:16 PM
4 points
7 comments · 2 min read · LW link
(heytraveler.substack.com)

ChatGPT: “An error occurred. If this issue persists...”

Bill Benzon · Dec 7, 2022, 3:41 PM
5 points
11 comments · 3 min read · LW link

Take 6: CAIS is actually Orwellian.

Charlie Steiner · Dec 7, 2022, 1:50 PM
14 points
8 comments · 2 min read · LW link

Peter Thiel on Technological Stagnation and Out of Touch Rationalists

Matt Goldenberg · Dec 7, 2022, 1:15 PM
9 points
26 comments · 1 min read · LW link
(youtu.be)

[Link] Wavefunctions: from Linear Algebra to Spinors

sen · Dec 7, 2022, 12:44 PM
11 points
12 comments · 1 min read · LW link
(paperclip.substack.com)

Why I like Zulip instead of Slack or Discord

Alok Singh · Dec 7, 2022, 9:28 AM
31 points
10 comments · 1 min read · LW link

Bioweapons, and ChatGPT (another vulnerability story)

Beeblebrox · Dec 7, 2022, 7:27 AM
−5 points
0 comments · 2 min read · LW link

Where to be an AI Safety Professor

scasper · Dec 7, 2022, 7:09 AM
31 points
12 comments · 2 min read · LW link

[Question] Are there any tools to convert LW sequences to PDF or any other file format?

quetzal_rainbow · Dec 7, 2022, 5:28 AM
2 points
2 comments · 1 min read · LW link

Manifold Markets community meetup

Sinclair Chen · Dec 7, 2022, 3:25 AM
4 points
0 comments · 1 min read · LW link

“Attention Passengers”: not for Signs

jefftk · Dec 7, 2022, 2:00 AM
27 points
10 comments · 1 min read · LW link
(www.jefftk.com)

[ASoT] Probability Infects Concepts it Touches

Ulisse Mini · Dec 7, 2022, 1:48 AM
10 points
4 comments · 1 min read · LW link

Simple Way to Prevent Power-Seeking AI

research_prime_space · Dec 7, 2022, 12:26 AM
12 points
1 comment · 1 min read · LW link

In defense of probably wrong mechanistic models

evhub · Dec 6, 2022, 11:24 PM
55 points
10 comments · 2 min read · LW link

AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts

Jordan Arel · Dec 6, 2022, 10:35 PM
4 points
2 comments · 3 min read · LW link

ChatGPT and the Human Race

Ben Reilly · Dec 6, 2022, 9:38 PM
6 points
1 comment · 3 min read · LW link

[Question] How do finite factored sets compare with phase space?

Alex_Altair · Dec 6, 2022, 8:05 PM
15 points
1 comment · 1 min read · LW link

Mesa-Optimizers via Grokking

orthonormal · Dec 6, 2022, 8:05 PM
36 points
4 comments · 6 min read · LW link

Using GPT-Eliezer against ChatGPT Jailbreaking

Dec 6, 2022, 7:54 PM
170 points
85 comments · 9 min read · LW link

The Parable of the Crimp

Phosphorous · Dec 6, 2022, 6:41 PM
11 points
3 comments · 3 min read · LW link

The Categorical Imperative Obscures

Gordon Seidoh Worley · Dec 6, 2022, 5:48 PM
17 points
17 comments · 2 min read · LW link

MIRI’s “Death with Dignity” in 60 seconds.

Cleo Nardo · Dec 6, 2022, 5:18 PM
58 points
4 comments · 1 min read · LW link

Things roll downhill

awenonian · Dec 6, 2022, 3:27 PM
19 points
0 comments · 1 min read · LW link

EA & LW Forums Weekly Summary (28th Nov − 4th Dec 22′)

Zoe Williams · Dec 6, 2022, 9:38 AM
10 points
1 comment · LW link

Take 5: Another problem for natural abstractions is laziness.

Charlie Steiner · Dec 6, 2022, 7:00 AM
31 points
4 comments · 3 min read · LW link

Verification Is Not Easier Than Generation In General

johnswentworth · Dec 6, 2022, 5:20 AM
71 points
27 comments · 1 min read · LW link

Shh, don’t tell the AI it’s likely to be evil

naterush · Dec 6, 2022, 3:35 AM
19 points
9 comments · 1 min read · LW link

[Question] What are the major underlying divisions in AI safety?

Chris_Leong · Dec 6, 2022, 3:28 AM
5 points
2 comments · 1 min read · LW link

[Link] Why I’m optimistic about OpenAI’s alignment approach

janleike · Dec 5, 2022, 10:51 PM
98 points
15 comments · 1 min read · LW link
(aligned.substack.com)

The No Free Lunch theorem for dummies

Steven Byrnes · Dec 5, 2022, 9:46 PM
37 points
16 comments · 3 min read · LW link