Should you have children? A decision framework for a crucial life choice that affects yourself, your child and the world

Sherrinford · 4 Dec 2024 23:14 UTC
0 points
1 comment · 20 min read · LW link

CCing Mailing Lists on External Communication

jefftk · 4 Dec 2024 22:00 UTC
9 points
0 comments · 1 min read · LW link
(www.jefftk.com)

Picking favourites is hard

dkl9 · 4 Dec 2024 20:46 UTC
11 points
3 comments · 1 min read · LW link
(dkl9.net)

[Question] How can I convince my cryptobro friend that S&P500 is efficient?

AhmedNeedsATherapist · 4 Dec 2024 20:04 UTC
−7 points
10 comments · 1 min read · LW link

The 2023 LessWrong Review: The Basic Ask

Raemon · 4 Dec 2024 19:52 UTC
74 points
25 comments · 9 min read · LW link

Is the AI Doomsday Narrative the Product of a Big Tech Conspiracy?

garrison · 4 Dec 2024 19:20 UTC
35 points
1 comment · 1 min read · LW link
(garrisonlovely.substack.com)

[Question] AI box question

KvmanThinking · 4 Dec 2024 19:03 UTC
2 points
2 comments · 1 min read · LW link

The Polite Coup

Charlie Sanders · 4 Dec 2024 14:03 UTC
3 points
0 comments · 3 min read · LW link
(www.dailymicrofiction.com)

Analysis of Global AI Governance Strategies

4 Dec 2024 10:45 UTC
38 points
10 comments · 36 min read · LW link

[Question] Cryonics considerations: how big of a problem is ischemia?

kman · 4 Dec 2024 4:45 UTC
8 points
1 comment · 1 min read · LW link

AI #93: Happy Tuesday

Zvi · 4 Dec 2024 0:30 UTC
26 points
2 comments · 23 min read · LW link
(thezvi.wordpress.com)

A Qualitative Case for LTFF: Filling Critical Ecosystem Gaps

Linch · 3 Dec 2024 21:57 UTC
64 points
2 comments · 1 min read · LW link

Deep Causal Transcoding: A Framework for Mechanistically Eliciting Latent Behaviors in Language Models

3 Dec 2024 21:19 UTC
83 points
7 comments · 41 min read · LW link

“Alignment at Large”: Bending the Arc of History Towards Life-Affirming Futures

welfvh · 3 Dec 2024 21:17 UTC
5 points
0 comments · 4 min read · LW link

Roots of Progress is hiring an event manager

jasoncrawford · 3 Dec 2024 20:46 UTC
10 points
0 comments · 7 min read · LW link
(rootsofprogress.notion.site)

Do simulacra dream of digital sheep?

EuanMcLean · 3 Dec 2024 20:25 UTC
16 points
36 comments · 10 min read · LW link

Orca communication project - seeking feedback (and collaborators)

Towards_Keeperhood · 3 Dec 2024 17:29 UTC
35 points
16 comments · 2 min read · LW link

Book a Time to Chat about Interp Research

Logan Riggs · 3 Dec 2024 17:27 UTC
47 points
3 comments · 1 min read · LW link

Balsa Research 2024 Update

Zvi · 3 Dec 2024 12:30 UTC
19 points
0 comments · 5 min read · LW link
(thezvi.wordpress.com)

First Solo Bus Ride

jefftk · 3 Dec 2024 12:20 UTC
28 points
1 comment · 1 min read · LW link
(www.jefftk.com)

How to make evals for the AISI evals bounty

TheManxLoiner · 3 Dec 2024 10:44 UTC
8 points
0 comments · 5 min read · LW link

Should there be just one western AGI project?

3 Dec 2024 10:11 UTC
78 points
72 comments · 15 min read · LW link

Cognitive Biases Contributing to AI X-risk - a deleted excerpt from my 2018 ARCHES draft

Andrew_Critch · 3 Dec 2024 9:29 UTC
46 points
2 comments · 5 min read · LW link

[Question] What is your opinion of Dr. Angelo Dilullo (meditation)?

Suh_Prance_Alot · 3 Dec 2024 5:54 UTC
0 points
0 comments · 1 min read · LW link

Chemical Turing Machines

Yudhister Kumar · 3 Dec 2024 5:26 UTC
10 points
2 comments · 4 min read · LW link
(www.yudhister.me)

MIRI’s 2024 End-of-Year Update

Rob Bensinger · 3 Dec 2024 4:33 UTC
98 points
2 comments · 4 min read · LW link

Linkpost: Rat Traps by Sheon Han in Asterisk Mag

Chris_Leong · 3 Dec 2024 3:22 UTC
12 points
5 comments · 1 min read · LW link
(asteriskmag.com)

[Question] Who are the worthwhile non-European pre-Industrial thinkers?

Lorec · 3 Dec 2024 1:45 UTC
12 points
4 comments · 1 min read · LW link

A Paradox of Simulated Suffering

arusarda · 2 Dec 2024 23:44 UTC
−1 points
3 comments · 1 min read · LW link

Levels of Thought: from Points to Fields

HNX · 2 Dec 2024 20:25 UTC
4 points
2 comments · 23 min read · LW link

From Code to Managing: Why Being a ‘Force Multiplier’ Matters to Me More Than Being a Coding Wizard

cloak · 2 Dec 2024 20:10 UTC
−3 points
0 comments · 1 min read · LW link
(www.reddit.com)

A case for donating to AI risk reduction (including if you work in AI)

tlevin · 2 Dec 2024 19:05 UTC
61 points
2 comments · 1 min read · LW link

Fertility Roundup #4

Zvi · 2 Dec 2024 14:30 UTC
35 points
16 comments · 49 min read · LW link
(thezvi.wordpress.com)

Conjecture: A Roadmap for Cognitive Software and A Humanist Future of AI

2 Dec 2024 13:28 UTC
44 points
10 comments · 29 min read · LW link
(www.conjecture.dev)

2024 Unofficial LessWrong Census/Survey

Screwtape · 2 Dec 2024 5:30 UTC
92 points
42 comments · 1 min read · LW link

Drexler’s Nanotech Software

PeterMcCluskey · 2 Dec 2024 4:55 UTC
65 points
9 comments · 4 min read · LW link
(bayesianinvestor.com)

Sorry for the downtime, looks like we got DDosd

habryka · 2 Dec 2024 4:14 UTC
109 points
13 comments · 1 min read · LW link

[Question] Is malice a real emotion?

landscape_kiwi · 1 Dec 2024 23:47 UTC
7 points
5 comments · 1 min read · LW link

Teaching My Younger Self to Program: A case study of how I’d pass on my skill at self-learning

Shoshannah Tekofsky · 1 Dec 2024 21:05 UTC
25 points
1 comment · 7 min read · LW link
(thinkfeelplay.substack.com)

[Question] Which Biases are most important to Overcome?

abstractapplic · 1 Dec 2024 15:40 UTC
35 points
24 comments · 1 min read · LW link

Commenting Patterns by Platform

jefftk · 1 Dec 2024 11:50 UTC
12 points
0 comments · 1 min read · LW link
(www.jefftk.com)

[Letter] Chinese Quickstart

lsusr · 1 Dec 2024 6:38 UTC
32 points
0 comments · 5 min read · LW link

AXRP Episode 39 - Evan Hubinger on Model Organisms of Misalignment

DanielFilan · 1 Dec 2024 6:00 UTC
41 points
0 comments · 67 min read · LW link

Magnitudes: Let’s Comprehend the Incomprehensible!

joec · 1 Dec 2024 3:08 UTC
21 points
8 comments · 3 min read · LW link

[Question] Why does ChatGPT throw an error when outputting “David Mayer”?

Archimedes · 1 Dec 2024 0:11 UTC
6 points
9 comments · 1 min read · LW link

Introducing the Anthropic Fellows Program

30 Nov 2024 23:47 UTC
26 points
0 comments · 4 min read · LW link
(alignment.anthropic.com)

The Shape of Heaven

ejk64 · 30 Nov 2024 23:38 UTC
15 points
1 comment · 5 min read · LW link

AI Training Opt-Outs Reinforce Global Power Asymmetries

kushagra · 30 Nov 2024 22:08 UTC
3 points
0 comments · 6 min read · LW link

Visual demonstration of Optimizer’s curse

Roman Malov · 30 Nov 2024 19:34 UTC
24 points
3 comments · 7 min read · LW link

CAIDP Statement on Lethal Autonomous Weapons Systems

Heramb · 30 Nov 2024 18:16 UTC
−1 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)