Convergence Towards World-Models: A Gears-Level Model

Thane Ruthenis · Aug 4, 2022, 11:31 PM
38 points
1 comment · 13 min read · LW link

Cambist Booking

Screwtape · Aug 4, 2022, 10:40 PM
20 points
3 comments · 4 min read · LW link

Calibration Trivia

Screwtape · Aug 4, 2022, 10:31 PM
12 points
9 comments · 4 min read · LW link

Monthly Shorts 7/22

Celer · Aug 4, 2022, 10:30 PM
5 points
0 comments · 3 min read · LW link
(keller.substack.com)

The Pragmascope Idea

johnswentworth · Aug 4, 2022, 9:52 PM
59 points
20 comments · 3 min read · LW link

Running a Basic Meetup

Screwtape · Aug 4, 2022, 9:49 PM
20 points
1 comment · 2 min read · LW link

Fiber arts, mysterious dodecahedrons, and waiting on “Eureka!”

eukaryote · Aug 4, 2022, 8:37 PM
124 points
15 comments · 9 min read · LW link · 1 review
(eukaryotewritesblog.com)

[Question] Would “Manhattan Project” style be beneficial or deleterious for AI Alignment?

Valentin2026 · Aug 4, 2022, 7:12 PM
5 points
1 comment · 1 min read · LW link

[Question] AI alignment: Would a lazy self-preservation instinct be sufficient?

BrainFrog · Aug 4, 2022, 5:53 PM
−1 points
4 comments · 1 min read · LW link

Socratic Ducking, OODA Loops, Frame-by-Frame Debugging

CFAR!Duncan · Aug 4, 2022, 5:44 PM
26 points
1 comment · 5 min read · LW link

What do ML researchers think about AI in 2022?

KatjaGrace · Aug 4, 2022, 3:40 PM
221 points
33 comments · 3 min read · LW link
(aiimpacts.org)

Interpretability isn’t Free

Joel Burget · Aug 4, 2022, 3:02 PM
10 points
1 comment · 2 min read · LW link

Covid 8/4/22: Rebound

Zvi · Aug 4, 2022, 11:20 AM
36 points
0 comments · 11 min read · LW link
(thezvi.wordpress.com)

High Reliability Orgs, and AI Companies

Raemon · Aug 4, 2022, 5:45 AM
86 points
7 comments · 12 min read · LW link · 1 review

Surprised by ELK report’s counterexample to Debate, IDA

Evan R. Murphy · Aug 4, 2022, 2:12 AM
18 points
0 comments · 5 min read · LW link

Clapping Lower

jefftk · Aug 4, 2022, 2:10 AM
38 points
7 comments · 1 min read · LW link
(www.jefftk.com)

[Question] How do I know if my first post should be a post, or a question?

Nathan1123 · Aug 4, 2022, 1:46 AM
3 points
4 comments · 1 min read · LW link

Three pillars for avoiding AGI catastrophe: Technical alignment, deployment decisions, and coordination

LintzA · Aug 3, 2022, 11:15 PM
24 points
0 comments · 11 min read · LW link

Precursor checking for deceptive alignment

evhub · Aug 3, 2022, 10:56 PM
24 points
0 comments · 14 min read · LW link

Transformer language models are doing something more general

Numendil · Aug 3, 2022, 9:13 PM
53 points
6 comments · 2 min read · LW link

[Question] Some doubts about Non Superintelligent AIs

aditya malik · Aug 3, 2022, 7:55 PM
0 points
4 comments · 1 min read · LW link

Announcing Squiggle: Early Access

ozziegooen · Aug 3, 2022, 7:48 PM
51 points
7 comments · 7 min read · LW link
(forum.effectivealtruism.org)

Survey: What (de)motivates you about AI risk?

Daniel_Friedrich · Aug 3, 2022, 7:17 PM
1 point
0 comments · 1 min read · LW link
(forms.gle)

Externalized reasoning oversight: a research direction for language model alignment

tamera · Aug 3, 2022, 12:03 PM
135 points
23 comments · 6 min read · LW link

Open & Welcome Thread—Aug/Sep 2022

Thomas · Aug 3, 2022, 10:22 AM
9 points
32 comments · 1 min read · LW link

[Question] How does one recognize information and differentiate it from noise?

M. Y. Zuo · Aug 3, 2022, 3:57 AM
4 points
29 comments · 1 min read · LW link

Law-Following AI 4: Don’t Rely on Vicarious Liability

Cullen · Aug 2, 2022, 11:26 PM
5 points
2 comments · 3 min read · LW link

Two-year update on my personal AI timelines

Ajeya Cotra · Aug 2, 2022, 11:07 PM
293 points
60 comments · 16 min read · LW link

What are the Red Flags for Neural Network Suffering? - Seeds of Science call for reviewers

rogersbacon · Aug 2, 2022, 10:37 PM
24 points
6 comments · 1 min read · LW link

Againstness

CFAR!Duncan · Aug 2, 2022, 7:29 PM
50 points
8 comments · 9 min read · LW link

(Summary) Sequence Highlights—Thinking Better on Purpose

qazzquimby · Aug 2, 2022, 5:45 PM
33 points
3 comments · 11 min read · LW link

Progress links and tweets, 2022-08-02

jasoncrawford · Aug 2, 2022, 5:03 PM
9 points
0 comments · 1 min read · LW link
(rootsofprogress.org)

[Question] I want to donate some money (not much, just what I can afford) to AGI Alignment research, to whatever organization has the best chance of making sure that AGI goes well and doesn’t kill us all. What are my best options, where can I make the most difference per dollar?

lumenwrites · Aug 2, 2022, 12:08 PM
15 points
9 comments · 1 min read · LW link

Thinking without priors?

Q Home · Aug 2, 2022, 9:17 AM
7 points
0 comments · 9 min read · LW link

[Question] Would quantum immortality mean subjective immortality?

n0ah · Aug 2, 2022, 4:54 AM
2 points
10 comments · 1 min read · LW link

Turbocharging

CFAR!Duncan · Aug 2, 2022, 12:01 AM
52 points
4 comments · 9 min read · LW link

Letter from leading Soviet Academicians to party and government leaders of the Soviet Union regarding signs of decline and structural problems of the economic-political system (1970)

M. Y. Zuo · Aug 1, 2022, 10:35 PM
20 points
10 comments · 16 min read · LW link

Technical AI Alignment Study Group

Eric K · Aug 1, 2022, 6:33 PM
5 points
0 comments · 1 min read · LW link

[Question] Is there any writing about prompt engineering for humans?

Alex Hollow · Aug 1, 2022, 12:52 PM
18 points
8 comments · 1 min read · LW link

Meditation course claims 65% enlightenment rate: my review

KatWoods · Aug 1, 2022, 11:25 AM
111 points
35 comments · 14 min read · LW link

[Question] Which intro-to-AI-risk text would you recommend to...

Sherrinford · Aug 1, 2022, 9:36 AM
12 points
1 comment · 1 min read · LW link

Polaris, Five-Second Versions, and Thought Lengths

CFAR!Duncan · Aug 1, 2022, 7:14 AM
50 points
12 comments · 8 min read · LW link

A Word is Worth 1,000 Pictures

Kully · Aug 1, 2022, 4:08 AM
1 point
0 comments · 2 min read · LW link

On akrasia: starting at the bottom

seecrow · Aug 1, 2022, 4:08 AM
37 points
2 comments · 3 min read · LW link

[Question] How likely do you think worse-than-extinction type fates to be?

span1 · Aug 1, 2022, 4:08 AM
3 points
3 comments · 1 min read · LW link

Abstraction sacrifices causal clarity

Marv K · Jul 31, 2022, 7:24 PM
2 points
0 comments · 3 min read · LW link

Time-logging programs and/or spreadsheets (2022)

mikbp · Jul 31, 2022, 6:18 PM
3 points
3 comments · 1 min read · LW link

Conservatism is a rational response to epistemic uncertainty

contrarianbrit · Jul 31, 2022, 6:04 PM
2 points
11 comments · 9 min read · LW link
(thomasprosser.substack.com)

South Bay ACX/LW Meetup

IS · Jul 31, 2022, 3:30 PM
2 points
0 comments · 1 min read · LW link

Perverse Independence Incentives

jefftk · Jul 31, 2022, 2:40 PM
61 points
3 comments · 1 min read · LW link
(www.jefftk.com)