
Longtermism


Longtermism[1][2] is the view that future lives matter morally and that our obligations to them are comparable to our obligations to people alive today. William MacAskill states it in three clauses[3]:

  1. Future people count.
  2. There could be a lot of them.
  3. We can make their lives go better.
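As a loose illustration of the quantitative intuition behind the second and third clauses (the figures below are assumptions for the sake of the example, not drawn from MacAskill or the posts listed on this page), the argument is often put as a simple expected-value calculation:

```python
# Hypothetical, illustrative numbers only; nothing here comes from the tag text
# or the posts below. The point is just the shape of the argument: if the
# future could contain very many people, even a tiny reduction in extinction
# risk has enormous expected value.
future_people = 1e16      # assumed number of people who could ever live
risk_reduction = 1e-3     # assumed absolute reduction in extinction probability

expected_future_lives = future_people * risk_reduction
print(f"{expected_future_lives:.0e}")  # 1e+13 expected future lives enabled
```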

Criticisms and responses

  1. ^
  2. ^
  3. ^

In Defence of Temporal Discounting in Longtermist Ethics

DragonGod · Nov 13, 2022, 9:54 PM
25 points
4 comments · 1 min read · LW link

An animated introduction to longtermism (feat. Robert Miles)

Writer · Jun 21, 2021, 7:24 PM
18 points
4 comments · 4 min read · LW link
(youtu.be)

Aggregative principles approximate utilitarian principles

Cleo Nardo · Jun 12, 2024, 4:27 PM
28 points
3 comments · 23 min read · LW link

Zvi’s Thoughts on His 2nd Round of SFF

Zvi · Nov 20, 2024, 1:40 PM
91 points
2 comments · 10 min read · LW link
(thezvi.wordpress.com)

The Most Important Century: The Animation

Jul 24, 2022, 8:58 PM
46 points
2 comments · 20 min read · LW link
(youtu.be)

[Question] “Fanatical” Longtermists: Why is Pascal’s Wager wrong?

Yitz · Jul 27, 2022, 4:16 AM
3 points
7 comments · 1 min read · LW link

Long Now, and Culture vs Artifacts

Raemon · Feb 3, 2020, 9:49 PM
26 points
3 comments · 6 min read · LW link

Appraising aggregativism and utilitarianism

Cleo Nardo · Jun 21, 2024, 11:10 PM
27 points
10 comments · 19 min read · LW link

Don’t leave your fingerprints on the future

So8res · Oct 8, 2022, 12:35 AM
131 points
48 comments · 5 min read · LW link

LTFF and EAIF are unusually funding-constrained right now

Aug 30, 2023, 1:03 AM
90 points
24 comments · 15 min read · LW link
(forum.effectivealtruism.org)

Possible Divergence in AGI Risk Tolerance between Selfish and Altruistic agents

Brad West · Sep 9, 2023, 12:23 AM
1 point
1 comment · 2 min read · LW link

Aggregative Principles of Social Justice

Cleo Nardo · Jun 5, 2024, 1:44 PM
29 points
10 comments · 37 min read · LW link

[Book Review] Destiny Disrupted

lsusr · Mar 21, 2021, 7:09 AM
58 points
4 comments · 9 min read · LW link

Three Fables of Magical Girls and Longtermism

Ulisse Mini · Dec 2, 2022, 10:01 PM
31 points
11 comments · 2 min read · LW link

Longtermism vs short-termism for personal life extension

Mati_Roy · Jul 17, 2021, 3:52 AM
12 points
2 comments · 2 min read · LW link

Altruism Under Extreme Uncertainty

lsusr · Aug 27, 2021, 6:58 AM
37 points
9 comments · 2 min read · LW link

Grabby Aliens could be Good, could be Bad

mako yass · Mar 7, 2022, 1:24 AM
28 points
10 comments · 4 min read · LW link

Value Deathism

Vladimir_Nesov · Oct 30, 2010, 6:20 PM
26 points
121 comments · 1 min read · LW link

Massive consequences

KatjaGrace · Feb 7, 2021, 5:30 AM
23 points
15 comments · 1 min read · LW link
(worldspiritsockpuppet.com)

My Most Likely Reason to Die Young is AI X-Risk

AISafetyIsNotLongtermist · Jul 4, 2022, 5:08 PM
61 points
24 comments · 4 min read · LW link
(forum.effectivealtruism.org)

Announcing the EA Archive

Aaron Bergman · Jul 6, 2023, 1:49 PM
13 points
2 comments · 1 min read · LW link

The expected value of the long-term future

[deleted] · Dec 28, 2017, 10:46 PM
11 points
5 comments · 1 min read · LW link

Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

Paul Crowley · Jun 26, 2013, 1:17 PM
10 points
20 comments · 1 min read · LW link

A Toy Model of Hingeyness

B Jacobs · Sep 7, 2020, 5:38 PM
16 points
10 comments · 4 min read · LW link

The Promises and Pitfalls of Long-Term Forecasting

GeoVane · Sep 11, 2023, 5:04 AM
1 point
0 comments · 5 min read · LW link

Announcing Future Forum—Apply Now

Jul 11, 2022, 10:57 PM
8 points
0 comments · 4 min read · LW link
(forum.effectivealtruism.org)

Enlightenment Values in a Vulnerable World

Maxwell Tabarrok · Jul 20, 2022, 7:52 PM
15 points
6 comments · 31 min read · LW link
(maximumprogress.substack.com)

Matt Yglesias on AI Policy

Grant Demaree · Aug 17, 2022, 11:57 PM
25 points
1 comment · 1 min read · LW link
(www.slowboring.com)

Emily Brontë on: Psychology Required for Serious™ AGI Safety Research

robertzk · Sep 14, 2022, 2:47 PM
2 points
0 comments · 1 min read · LW link

A Longtermist case against Veganism

Connor Tabarrok · Oct 20, 2022, 2:30 PM
−3 points
3 comments · 1 min read · LW link

Reslab Request for Information: EA hardware projects

Joel Becker · Oct 26, 2022, 9:13 PM
10 points
0 comments · 1 min read · LW link

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · Aug 17, 2024, 10:28 PM
6 points
1 comment · 3 min read · LW link

Fair Collective Efficient Altruism

Jobst Heitzig · Nov 25, 2022, 9:38 AM
2 points
1 comment · 5 min read · LW link

How Many Lives Does X-Risk Work Save From Nonexistence On Average?

Jordan Arel · Dec 8, 2022, 9:57 PM
4 points
5 comments · 14 min read · LW link

Introducing The Logical Foundation, A Plan to End Poverty With Guaranteed Income

Michael Simm · Nov 18, 2022, 8:13 AM
9 points
23 comments · 1 min read · LW link

SBF x LoL

Nicholas / Heather Kross · Nov 15, 2022, 8:24 PM
17 points
6 comments · 1 min read · LW link

The Astronomical Sacrifice Dilemma

Matthew McRedmond · Mar 11, 2024, 7:58 PM
15 points
3 comments · 4 min read · LW link

[Linkpost] Leif Wenar’s The Deaths of Effective Altruism

Arden · Mar 27, 2024, 7:17 PM
8 points
1 comment · 1 min read · LW link
(www.wired.com)

Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: On Static Nature of Intelligence & Physicalization of Ethics

ank · Feb 22, 2025, 12:12 AM
1 point
0 comments · 6 min read · LW link

Last Line of Defense: Minimum Viable Shelters for Mirror Bacteria

Ulrik Horn · Dec 21, 2024, 8:28 AM
11 points
26 comments · 21 min read · LW link

Two arguments against longtermist thought experiments

momom2 · Nov 2, 2024, 10:22 AM
15 points
5 comments · 3 min read · LW link

Constitutions for ASI?

ukc10014 · Jan 28, 2025, 4:32 PM
3 points
0 comments · 1 min read · LW link
(forum.effectivealtruism.org)

How To Prevent a Dystopia

ank · Jan 29, 2025, 2:16 PM
−3 points
4 comments · 1 min read · LW link

Are we the Wolves now? Human Eugenics under AI Control

Brit · Jan 30, 2025, 8:31 AM
−2 points
1 comment · 2 min read · LW link

Rational Utopia & Narrow Way There: Multiversal AI Alignment, Non-Agentic Static Place AI, New Ethics… (V. 4)

ank · Feb 11, 2025, 3:21 AM
13 points
8 comments · 35 min read · LW link

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition

Adrià Moret · Dec 2, 2023, 2:07 PM
26 points
31 comments · 42 min read · LW link

Static Place AI Makes Agentic AI Redundant: Multiversal AI Alignment & Rational Utopia

ank · Feb 13, 2025, 10:35 PM
1 point
2 comments · 11 min read · LW link

[Linkpost] The AGI Show podcast

Soroush Pour · May 23, 2023, 9:52 AM
4 points
0 comments · 1 min read · LW link

Why I am not a longtermist (May 2022)

boazbarak · Jun 6, 2023, 8:36 PM
38 points
19 comments · 9 min read · LW link
(windowsontheory.org)