We don’t trade with ants

KatjaGrace · 10 Jan 2023 23:50 UTC
269 points
108 comments · 7 min read · LW link
(worldspiritsockpuppet.com)

[Question] Who are the people who are currently profiting from inflation?

skogsnisse · 10 Jan 2023 21:39 UTC
1 point
2 comments · 1 min read · LW link

Is Progress Real?

rogersbacon · 10 Jan 2023 17:42 UTC
5 points
14 comments · 14 min read · LW link
(www.secretorum.life)

200 COP in MI: Interpreting Reinforcement Learning

Neel Nanda · 10 Jan 2023 17:37 UTC
25 points
1 comment · 10 min read · LW link

AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years

10 Jan 2023 16:06 UTC
117 points
44 comments · 26 min read · LW link

The Alignment Problem from a Deep Learning Perspective (major rewrite)

10 Jan 2023 16:06 UTC
84 points
8 comments · 39 min read · LW link
(arxiv.org)

Against using stock prices to forecast AI timelines

10 Jan 2023 16:03 UTC
23 points
2 comments · 2 min read · LW link

Sorting Pebbles Into Correct Heaps: The Animation

Writer · 10 Jan 2023 15:58 UTC
26 points
2 comments · 1 min read · LW link
(youtu.be)

Escape Velocity from Bullshit Jobs

Zvi · 10 Jan 2023 14:30 UTC
61 points
18 comments · 5 min read · LW link
(thezvi.wordpress.com)

Scaling laws vs individual differences

beren · 10 Jan 2023 13:22 UTC
44 points
21 comments · 7 min read · LW link

Notes on writing

RP · 10 Jan 2023 4:01 UTC
35 points
11 comments · 3 min read · LW link

Idea: Learning How To Move Towards The Metagame

Algon · 10 Jan 2023 0:58 UTC
10 points
3 comments · 1 min read · LW link

Review AI Alignment posts to help figure out how to make a proper AI Alignment review

10 Jan 2023 0:19 UTC
85 points
31 comments · 2 min read · LW link

Against the paradox of tolerance

pchvykov · 10 Jan 2023 0:12 UTC
1 point
11 comments · 3 min read · LW link

Increased Scam Quality/Quantity (Hypothesis in need of data)?

Beeblebrox · 9 Jan 2023 22:57 UTC
9 points
6 comments · 1 min read · LW link

Wentworth and Larsen on buying time

9 Jan 2023 21:31 UTC
73 points
6 comments · 12 min read · LW link

EA & LW Forum Summaries—Holiday Edition (19th Dec − 8th Jan)

Zoe Williams · 9 Jan 2023 21:06 UTC
11 points
0 comments · 1 min read · LW link

GWWC Should Require Public Charity Evaluations

jefftk · 9 Jan 2023 20:10 UTC
28 points
0 comments · 4 min read · LW link
(www.jefftk.com)

[MLSN #7]: an example of an emergent internal optimizer

9 Jan 2023 19:39 UTC
28 points
0 comments · 6 min read · LW link

Trying to isolate objectives: approaches toward high-level interpretability

Jozdien · 9 Jan 2023 18:33 UTC
48 points
14 comments · 8 min read · LW link

The special nature of special relativity

adamShimi · 9 Jan 2023 17:30 UTC
37 points
1 comment · 3 min read · LW link
(epistemologicalvigilance.substack.com)

Pierre Menard, pixel art, and entropy

Joey Marcellino · 9 Jan 2023 16:34 UTC
1 point
1 comment · 6 min read · LW link

Forecasting extreme outcomes

AidanGoth · 9 Jan 2023 16:34 UTC
4 points
1 comment · 2 min read · LW link
(docs.google.com)

Evidence under Adversarial Conditions

PeterMcCluskey · 9 Jan 2023 16:21 UTC
57 points
1 comment · 3 min read · LW link
(bayesianinvestor.com)

How to Bounded Distrust

Zvi · 9 Jan 2023 13:10 UTC
120 points
16 comments · 4 min read · LW link
(thezvi.wordpress.com)

Reification bias

9 Jan 2023 12:22 UTC
25 points
6 comments · 2 min read · LW link

Big list of AI safety videos

JakubK · 9 Jan 2023 6:12 UTC
11 points
2 comments · 1 min read · LW link
(docs.google.com)

Rationality Practice: Self-Deception

Darmani · 9 Jan 2023 4:07 UTC
6 points
0 comments · 1 min read · LW link

Wolf Incident Postmortem

jefftk · 9 Jan 2023 3:20 UTC
134 points
13 comments · 1 min read · LW link
(www.jefftk.com)

You’re Not One “You”—How Decision Theories Are Talking Past Each Other

keith_wynroe · 9 Jan 2023 1:21 UTC
28 points
11 comments · 8 min read · LW link

On Blogging and Podcasting

DanielFilan · 9 Jan 2023 0:40 UTC
18 points
6 comments · 11 min read · LW link
(danielfilan.com)

ChatGPT tells stories about XP-708-DQ, Eliezer, dragons, dark sorceresses, and unaligned robots becoming aligned

Bill Benzon · 8 Jan 2023 23:21 UTC
6 points
2 comments · 18 min read · LW link

Simulacra are Things

janus · 8 Jan 2023 23:03 UTC
63 points
7 comments · 2 min read · LW link

[Question] GPT learning from smarter texts?

Viliam · 8 Jan 2023 22:23 UTC
26 points
7 comments · 1 min read · LW link

Latent variable prediction markets mockup + designer request

tailcalled · 8 Jan 2023 22:18 UTC
25 points
4 comments · 1 min read · LW link

Citability of Lesswrong and the Alignment Forum

Leon Lang · 8 Jan 2023 22:12 UTC
48 points
2 comments · 1 min read · LW link

I tried to learn as much Deep Learning math as I could in 24 hours

Phosphorous · 8 Jan 2023 21:07 UTC
31 points
2 comments · 7 min read · LW link

[Question] What specific thing would you do with AI Alignment Research Assistant GPT?

quetzal_rainbow · 8 Jan 2023 19:24 UTC
45 points
9 comments · 1 min read · LW link

[Question] Research ideas (AI Interpretability & Neurosciences) for a 2-months project

flux · 8 Jan 2023 15:36 UTC
3 points
1 comment · 1 min read · LW link

200 COP in MI: Image Model Interpretability

Neel Nanda · 8 Jan 2023 14:53 UTC
18 points
3 comments · 6 min read · LW link

Halifax Monthly Meetup: Moloch in the HRM

Ideopunk · 8 Jan 2023 14:49 UTC
10 points
0 comments · 1 min read · LW link

Dangers of deference

TsviBT · 8 Jan 2023 14:36 UTC
58 points
5 comments · 2 min read · LW link

Could evolution produce something truly aligned with its own optimization standards? What would an answer to this mean for AI alignment?

No77e · 8 Jan 2023 11:04 UTC
3 points
4 comments · 1 min read · LW link

AI psychology should ground the theories of AI consciousness and inform human-AI ethical interaction design

Roman Leventov · 8 Jan 2023 6:37 UTC
19 points
8 comments · 2 min read · LW link

Stop Talking to Each Other and Start Buying Things: Three Decades of Survival in the Desert of Social Media

the gears to ascension · 8 Jan 2023 4:45 UTC
1 point
14 comments · 1 min read · LW link
(catvalente.substack.com)

Can Ads be GDPR Compliant?

jefftk · 8 Jan 2023 2:50 UTC
39 points
10 comments · 7 min read · LW link
(www.jefftk.com)

Feature suggestion: add a ‘clarity score’ to posts

LVSN · 8 Jan 2023 1:00 UTC
17 points
5 comments · 1 min read · LW link

[Question] How do I better stick to a morning schedule?

Randomized, Controlled · 8 Jan 2023 0:52 UTC
8 points
8 comments · 1 min read · LW link

Protectionism will Slow the Deployment of AI

bgold · 7 Jan 2023 20:57 UTC
30 points
6 comments · 2 min read · LW link

David Krueger on AI Alignment in Academia, Coordination and Testing Intuitions

Michaël Trazzi · 7 Jan 2023 19:59 UTC
13 points
0 comments · 4 min read · LW link
(theinsideview.ai)