Cause Prioritization (Tag)
Why I’m Skeptical About Unproven Causes (And You Should Be Too) · Peter Wildeford · Jul 29, 2013 · 42 points · 98 comments · 11 min read
[Question] What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · David Scott Krueger (formerly: capybaralet) · Aug 20, 2019 · 29 points · 27 comments · 1 min read
S-risks: Why they are the worst existential risks, and how to prevent them · Kaj_Sotala · Jun 20, 2017 · 31 points · 106 comments · 1 min read · (foundational-research.org)
Differential knowledge interconnection · Roman Leventov · Oct 12, 2024 · 6 points · 0 comments · 7 min read
Prioritization Research for Advancing Wisdom and Intelligence · ozziegooen · Oct 18, 2021 · 49 points · 8 comments · 5 min read · (forum.effectivealtruism.org)
Efficient Charity: Do Unto Others... · Scott Alexander · Dec 24, 2010 · 207 points · 322 comments · 6 min read
[Question] What’s the best ratio for Africans to starve compared to Ukrainians not dying in the war? · ChristianKl · Mar 10, 2022 · 9 points · 28 comments · 1 min read
Why SENS makes sense · emanuele ascani · Feb 22, 2020 · 28 points · 2 comments · 31 min read
Cause Awareness as a Factor against Cause Neutrality · Darmani · Aug 13, 2018 · 39 points · 4 comments · 2 min read
Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · AnnaSalamon · Dec 12, 2016 · 65 points · 38 comments · 5 min read
Why CFAR’s Mission? · AnnaSalamon · Jan 2, 2016 · 59 points · 56 comments · 9 min read
Longtermist Implications of the Existence Neutrality Hypothesis · Maxime Riché · Mar 20, 2025 · 3 points · 2 comments · 21 min read
Longtermist implications of aliens Space-Faring Civilizations—Introduction · Maxime Riché · Feb 21, 2025 · 21 points · 0 comments · 6 min read
Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization · Maxime Riché · Mar 17, 2025 · 5 points · 0 comments · 12 min read
The Convergent Path to the Stars · Maxime Riché · Mar 18, 2025 · 6 points · 0 comments · 20 min read
Maximizing Cost-effectiveness via Critical Inquiry · HoldenKarnofsky · Nov 10, 2011 · 31 points · 24 comments · 7 min read
Preserving our heritage: Building a movement and a knowledge ark for current and future generations · rnk8 · Nov 29, 2023 · 0 points · 5 comments · 12 min read
Improving local governance in fragile states—practical lessons from the field · Tim Liptrot · Jul 29, 2020 · 16 points · 3 comments · 6 min read
On Doing the Improbable · Eliezer Yudkowsky · Oct 28, 2018 · 130 points · 36 comments · 1 min read · 1 review
Caring less · eukaryote · Mar 13, 2018 · 73 points · 24 comments · 4 min read · 3 reviews
Is voting theory important? An attempt to check my bias. · Jameson Quinn · Feb 17, 2019 · 42 points · 14 comments · 6 min read
Robustness of Cost-Effectiveness Estimates and Philanthropy · JonahS · May 24, 2013 · 56 points · 37 comments · 6 min read
Ben Hoffman’s donor recommendations · Rob Bensinger · Jun 21, 2018 · 41 points · 19 comments · 1 min read
Is GiveWell.org the best charity (excluding SIAI)? · syllogism · Feb 26, 2011 · 52 points · 61 comments · 2 min read
OpenPhil’s “Update on Cause Prioritization / Worldview Diversification” · Raemon · Jan 31, 2018 · 13 points · 0 comments · 1 min read · (www.openphilanthropy.org)
On characterizing heavy-tailedness · Jsevillamol · Feb 16, 2020 · 38 points · 6 comments · 4 min read
80,000 Hours: EA and Highly Political Causes · The_Jaded_One · Jan 26, 2017 · 45 points · 25 comments · 7 min read
Efficient Charity · multifoliaterose · Dec 4, 2010 · 42 points · 185 comments · 9 min read
Responses to questions on donating to 80k, GWWC, EAA and LYCS · wdmacaskill · Nov 20, 2012 · 34 points · 20 comments · 13 min read
Defeating Mundane Holocausts With Robots · lsparrish · May 30, 2011 · 34 points · 28 comments · 2 min read
Oxford Prioritisation Project Review · [deleted] · Oct 13, 2017 · 11 points · 6 comments · 22 min read
Are long-term investments a good way to help the future? · Dacyn · Apr 30, 2018 · 10 points · 50 comments · 3 min read
A case for strategy research: what it is and why we need more of it · Siebe · Jun 20, 2019 · 24 points · 19 comments · 20 min read
[Link]: GiveWell is aiming to have a new #1 charity by December · Normal_Anomaly · Nov 29, 2011 · 29 points · 4 comments · 1 min read
The (short) case for predicting what Aliens value · Jim Buhler · Jul 20, 2023 · 14 points · 5 comments · 3 min read
Giving Tuesday 2020 · jefftk · Nov 30, 2020 · 28 points · 0 comments · 1 min read · (www.jefftk.com)
How a billionaire could spend their money to help the disadvantaged: 7 ideas from the top of my head · Yitz · Dec 4, 2020 · 12 points · 12 comments · 6 min read
Probability and Politics · CarlShulman · Nov 24, 2010 · 28 points · 31 comments · 5 min read
What To Do: Environmentalism vs Friendly AI (John Baez) · XiXiDu · Apr 24, 2011 · 31 points · 63 comments · 2 min read
[Question] People are gathering 2 million USD to save a kid with a rare disease. I feel weird about it. Why? · hookdump · Apr 2, 2021 · 9 points · 7 comments · 1 min read
Overview of Rethink Priorities’ work on risks from nuclear weapons · MichaelA · Jun 11, 2021 · 12 points · 0 comments · 3 min read
Announcing the Nuclear Risk Forecasting Tournament · MichaelA · Jun 16, 2021 · 16 points · 2 comments · 2 min read
The Bunny: An EA Short Story · JohnGreer · Aug 21, 2022 · 15 points · 0 comments · 6 min read
Five Areas I Wish EAs Gave More Focus · Prometheus · Oct 27, 2022 · 13 points · 18 comments · 1 min read
Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation? · Jordan Arel · Aug 17, 2024 · 6 points · 1 comment · 3 min read
Attention on AI X-Risk Likely Hasn’t Distracted from Current Harms from AI · Erich_Grunewald · Dec 21, 2023 · 26 points · 2 comments · 17 min read · (www.erichgrunewald.com)
Comparing Alignment to other AGI interventions: Basic model · Martín Soto · Mar 20, 2024 · 12 points · 4 comments · 7 min read
Why I stopped working on AI safety · jbkjr · May 2, 2024 · −5 points · 0 comments · 4 min read · (jbkjr.me)
Super human AI is a very low hanging fruit! · Hzn · Dec 26, 2024 · −4 points · 0 comments · 7 min read
The Alignment Mapping Program: Forging Independent Thinkers in AI Safety—A Pilot Retrospective · Alvin Ånestrand, Jonas Hallgren, and Utilop · Jan 10, 2025 · 21 points · 0 comments · 4 min read
Two arguments against longtermist thought experiments · momom2 · Nov 2, 2024 · 15 points · 5 comments · 3 min read
Reducing x-risk might be actively harmful · MountainPath · Nov 18, 2024 · 5 points · 5 comments · 1 min read
A case for donating to AI risk reduction (including if you work in AI) · tlevin · Dec 2, 2024 · 61 points · 2 comments · 1 min read
Decision-Relevance of worlds and ADT implementations · Maxime Riché · Mar 6, 2025 · 9 points · 0 comments · 15 min read