AstralCodexTen and Rationality Meetup Organisers’ Retreat — Europe, Middle East, and Africa 2023

Sam F. Brown, Sep 15, 2022, 10:38 PM
25 points
2 comments · 2 min read · LW link
(www.rationalitymeetups.org)

A market is a neural network

David Hugh-Jones, Sep 15, 2022, 9:53 PM
7 points
4 comments · 8 min read · LW link

Understanding Conjecture: Notes from Connor Leahy interview

Orpheus16, Sep 15, 2022, 6:37 PM
107 points
23 comments · 15 min read · LW link

How should DeepMind’s Chinchilla revise our AI forecasts?

Cleo Nardo, Sep 15, 2022, 5:54 PM
35 points
12 comments · 13 min read · LW link

Rational Animations’ Script Writing Contest

Writer, Sep 15, 2022, 4:56 PM
23 points
1 comment · 3 min read · LW link

Covid 9/15/22: Permanent Normal

Zvi, Sep 15, 2022, 4:00 PM
32 points
9 comments · 20 min read · LW link
(thezvi.wordpress.com)

[Question] Are Human Brains Universal?

DragonGod, Sep 15, 2022, 3:15 PM
16 points
28 comments · 5 min read · LW link

Intelligence failures and a theory of change for forecasting

NathanBarnard, Sep 15, 2022, 3:02 PM
5 points
0 comments · 10 min read · LW link

Why deceptive alignment matters for AGI safety

Marius Hobbhahn, Sep 15, 2022, 1:38 PM
68 points
13 comments · 13 min read · LW link

FDT defects in a realistic Twin Prisoners’ Dilemma

SMK, Sep 15, 2022, 8:55 AM
38 points
1 comment · 26 min read · LW link

[Question] What’s the longest a sentient observer could survive in the Dark Era?

Raemon, Sep 15, 2022, 8:43 AM
33 points
15 comments · 1 min read · LW link

The Value of Not Being an Imposter

sudo, Sep 15, 2022, 8:32 AM
5 points
0 comments · 1 min read · LW link

Capability and Agency as Cornerstones of AI risk — My current model

wilm, Sep 15, 2022, 8:25 AM
10 points
4 comments · 12 min read · LW link

General advice for transitioning into Theoretical AI Safety

Martín Soto, Sep 15, 2022, 5:23 AM
12 points
0 comments · 10 min read · LW link

Sequencing Intro II: Adapters

jefftk, Sep 15, 2022, 3:30 AM
12 points
0 comments · 2 min read · LW link
(www.jefftk.com)

[Question] How do I find tutors for obscure skills/subjects (i.e. fermi estimation tutors)

joraine, Sep 15, 2022, 1:15 AM
11 points
2 comments · 1 min read · LW link

[Question] Forecasting thread: How does AI risk level vary based on timelines?

elifland, Sep 14, 2022, 11:56 PM
34 points
7 comments · 1 min read · LW link

Coordinate-Free Interpretability Theory

johnswentworth, Sep 14, 2022, 11:33 PM
52 points
16 comments · 5 min read · LW link

Progress links and tweets, 2022-09-14

jasoncrawford, Sep 14, 2022, 11:21 PM
9 points
2 comments · 1 min read · LW link
(rootsofprogress.org)

Effective altruism in the garden of ends

Tyler Alterman, Sep 14, 2022, 10:02 PM
24 points
1 comment · 27 min read · LW link

The problem with the media presentation of “believing in AI”

Roman Leventov, Sep 14, 2022, 9:05 PM
3 points
0 comments · 1 min read · LW link

Seeing the Schema

vitaliya, Sep 14, 2022, 8:45 PM
23 points
6 comments · 1 min read · LW link

Responding to ‘Beyond Hyperanthropomorphism’

ukc10014, Sep 14, 2022, 8:37 PM
9 points
0 comments · 16 min read · LW link

When is intent alignment sufficient or necessary to reduce AGI conflict?

Sep 14, 2022, 7:39 PM
40 points
0 comments · 9 min read · LW link

When would AGIs engage in conflict?

Sep 14, 2022, 7:38 PM
52 points
5 comments · 13 min read · LW link

When does technical work to reduce AGI conflict make a difference?: Introduction

Sep 14, 2022, 7:38 PM
52 points
3 comments · 6 min read · LW link

ACT-1: Transformer for Actions

Daniel Kokotajlo, Sep 14, 2022, 7:09 PM
52 points
4 comments · 1 min read · LW link
(www.adept.ai)

Renormalization: Why Bigger is Simpler

tailcalled, Sep 14, 2022, 5:52 PM
30 points
5 comments · 1 min read · LW link
(www.youtube.com)

Guesstimate Algorithm for Medical Research

Elizabeth, Sep 14, 2022, 5:30 PM
26 points
0 comments · 7 min read · LW link
(acesounderglass.com)

Precise P(doom) isn’t very important for prioritization or strategy

harsimony, Sep 14, 2022, 5:19 PM
14 points
6 comments · 1 min read · LW link

Transhumanism, genetic engineering, and the biological basis of intelligence.

fowlertm, Sep 14, 2022, 3:55 PM
41 points
23 comments · 1 min read · LW link

What would happen if we abolished the FDA tomorrow?

Yair Halberstadt, Sep 14, 2022, 3:22 PM
19 points
15 comments · 4 min read · LW link

Emily Brontë on: Psychology Required for Serious™ AGI Safety Research

robertzk, Sep 14, 2022, 2:47 PM
2 points
0 comments · 1 min read · LW link

The Defender’s Advantage of Interpretability

Marius Hobbhahn, Sep 14, 2022, 2:05 PM
41 points
4 comments · 6 min read · LW link

[Question] Why Do People Think Humans Are Stupid?

DragonGod, Sep 14, 2022, 1:55 PM
22 points
41 comments · 3 min read · LW link

[Question] Are Speed Superintelligences Feasible for Modern ML Techniques?

DragonGod, Sep 14, 2022, 12:59 PM
9 points
7 comments · 1 min read · LW link

[Question] Would a Misaligned SSI Really Kill Us All?

DragonGod, Sep 14, 2022, 12:15 PM
6 points
7 comments · 6 min read · LW link

Some ideas for epistles to the AI ethicists

Charlie Steiner, Sep 14, 2022, 9:07 AM
19 points
0 comments · 4 min read · LW link

Git Re-Basin: Merging Models modulo Permutation Symmetries [Linkpost]

aog, Sep 14, 2022, 8:55 AM
21 points
0 comments · 2 min read · LW link
(arxiv.org)

Dan Luu on Futurist Predictions

RobertM, Sep 14, 2022, 3:01 AM
50 points
9 comments · 5 min read · LW link
(danluu.com)

Simple 5x5 Go

jefftk, Sep 14, 2022, 2:00 AM
18 points
3 comments · 1 min read · LW link
(www.jefftk.com)

I’m taking a course on game theory and am faced with this question. What’s the rational decision?

Dalton Mabery, Sep 14, 2022, 12:27 AM
0 points
12 comments · 1 min read · LW link

Twin Cities ACX Meetup — Oct 2022

Timothy M., Sep 13, 2022, 10:38 PM
1 point
2 comments · 1 min read · LW link

Trying to find the underlying structure of computational systems

Matthias G. Mayer, Sep 13, 2022, 9:16 PM
18 points
9 comments · 4 min read · LW link

Risk aversion and GPT-3

casualphysicsenjoyer, Sep 13, 2022, 8:50 PM
1 point
0 comments · 1 min read · LW link

Simple proofs of the age of the universe (or other things)

Astynax, Sep 13, 2022, 6:20 PM
16 points
12 comments · 1 min read · LW link

New tool for exploring EA Forum, LessWrong and Alignment Forum — Tree of Tags

Filip Sondej, Sep 13, 2022, 5:33 PM
31 points
2 comments · 1 min read · LW link

An investigation into when agents may be incentivized to manipulate our beliefs.

Felix Hofstätter, Sep 13, 2022, 5:08 PM
15 points
0 comments · 14 min read · LW link

Deep Q-Networks Explained

Jay Bailey, Sep 13, 2022, 12:01 PM
58 points
8 comments · 20 min read · LW link

Ideas of the Gaps

Q Home, Sep 13, 2022, 10:55 AM
4 points
3 comments · 12 min read · LW link