Cause Prioritization

Why I’m Skeptical About Unproven Causes (And You Should Be Too)

Peter Wildeford · Jul 29, 2013, 9:09 AM
42 points
98 comments · 11 min read · LW link

[Question] What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause?

David Scott Krueger (formerly: capybaralet) · Aug 20, 2019, 9:45 PM
29 points
27 comments · 1 min read · LW link

S-risks: Why they are the worst existential risks, and how to prevent them

Kaj_Sotala · Jun 20, 2017, 12:34 PM
31 points
106 comments · 1 min read · LW link
(foundational-research.org)

Differential knowledge interconnection

Roman Leventov · Oct 12, 2024, 12:52 PM
6 points
0 comments · 7 min read · LW link

Prioritization Research for Advancing Wisdom and Intelligence

ozziegooen · Oct 18, 2021, 10:28 PM
49 points
8 comments · 5 min read · LW link
(forum.effectivealtruism.org)

Efficient Charity: Do Unto Others...

Scott Alexander · Dec 24, 2010, 9:26 PM
207 points
322 comments · 6 min read · LW link

[Question] What’s the best ratio for Africans to starve compared to Ukrainians not dying in the war?

ChristianKl · Mar 10, 2022, 6:52 PM
9 points
28 comments · 1 min read · LW link

Why SENS makes sense

emanuele ascani · Feb 22, 2020, 4:28 PM
28 points
2 comments · 31 min read · LW link

Cause Awareness as a Factor against Cause Neutrality

Darmani · Aug 13, 2018, 8:00 PM
39 points
4 comments · 2 min read · LW link

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

AnnaSalamon · Dec 12, 2016, 7:39 PM
65 points
38 comments · 5 min read · LW link

Why CFAR’s Mission?

AnnaSalamon · Jan 2, 2016, 11:23 PM
59 points
56 comments · 9 min read · LW link

Longtermist Implications of the Existence Neutrality Hypothesis

Maxime Riché · Mar 20, 2025, 12:20 PM
3 points
2 comments · 21 min read · LW link

Longtermist implications of aliens Space-Faring Civilizations—Introduction

Maxime Riché · Feb 21, 2025, 12:08 PM
21 points
0 comments · 6 min read · LW link

Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization

Maxime Riché · Mar 17, 2025, 1:12 PM
5 points
0 comments · 12 min read · LW link

The Convergent Path to the Stars

Maxime Riché · Mar 18, 2025, 5:09 PM
6 points
0 comments · 20 min read · LW link

Maximizing Cost-effectiveness via Critical Inquiry

HoldenKarnofsky · Nov 10, 2011, 7:25 PM
31 points
24 comments · 7 min read · LW link

Preserving our heritage: Building a movement and a knowledge ark for current and future generations

rnk8 · Nov 29, 2023, 7:20 PM
0 points
5 comments · 12 min read · LW link

Improving local governance in fragile states—practical lessons from the field

Tim Liptrot · Jul 29, 2020, 1:54 AM
16 points
3 comments · 6 min read · LW link

On Doing the Improbable

Eliezer Yudkowsky · Oct 28, 2018, 8:09 PM
130 points
36 comments · 1 min read · LW link · 1 review

Caring less

eukaryote · Mar 13, 2018, 10:53 PM
73 points
24 comments · 4 min read · LW link · 3 reviews

Is voting theory important? An attempt to check my bias.

Jameson Quinn · Feb 17, 2019, 11:45 PM
42 points
14 comments · 6 min read · LW link

Robustness of Cost-Effectiveness Estimates and Philanthropy

JonahS · May 24, 2013, 8:28 PM
56 points
37 comments · 6 min read · LW link

Ben Hoffman’s donor recommendations

Rob Bensinger · Jun 21, 2018, 4:02 PM
41 points
19 comments · 1 min read · LW link

Is GiveWell.org the best charity (excluding SIAI)?

syllogism · Feb 26, 2011, 1:37 PM
52 points
61 comments · 2 min read · LW link

OpenPhil’s “Update on Cause Prioritization / Worldview Diversification”

Raemon · Jan 31, 2018, 5:39 AM
13 points
0 comments · 1 min read · LW link
(www.openphilanthropy.org)

On characterizing heavy-tailedness

Jsevillamol · Feb 16, 2020, 12:14 AM
38 points
6 comments · 4 min read · LW link

80,000 Hours: EA and Highly Political Causes

The_Jaded_One · Jan 26, 2017, 9:44 PM
45 points
25 comments · 7 min read · LW link

Efficient Charity

multifoliaterose · Dec 4, 2010, 10:27 AM
42 points
185 comments · 9 min read · LW link

Responses to questions on donating to 80k, GWWC, EAA and LYCS

wdmacaskill · Nov 20, 2012, 10:41 PM
34 points
20 comments · 13 min read · LW link

Defeating Mundane Holocausts With Robots

lsparrish · May 30, 2011, 10:34 PM
34 points
28 comments · 2 min read · LW link

Oxford Prioritisation Project Review

[deleted] · Oct 13, 2017, 11:07 PM
11 points
6 comments · 22 min read · LW link

Are long-term investments a good way to help the future?

Dacyn · Apr 30, 2018, 2:41 PM
10 points
50 comments · 3 min read · LW link

A case for strategy research: what it is and why we need more of it

Siebe · Jun 20, 2019, 8:22 PM
24 points
19 comments · 20 min read · LW link

[Link]: GiveWell is aiming to have a new #1 charity by December

Normal_Anomaly · Nov 29, 2011, 3:11 AM
29 points
4 comments · 1 min read · LW link

The (short) case for predicting what Aliens value

Jim Buhler · Jul 20, 2023, 3:25 PM
14 points
5 comments · 3 min read · LW link

Giving Tuesday 2020

jefftk · Nov 30, 2020, 10:30 PM
28 points
0 comments · 1 min read · LW link
(www.jefftk.com)

How a billionaire could spend their money to help the disadvantaged: 7 ideas from the top of my head

Yitz · Dec 4, 2020, 6:09 AM
12 points
12 comments · 6 min read · LW link

Probability and Politics

CarlShulman · Nov 24, 2010, 5:02 PM
28 points
31 comments · 5 min read · LW link

What To Do: Environmentalism vs Friendly AI (John Baez)

XiXiDu · Apr 24, 2011, 6:03 PM
31 points
63 comments · 2 min read · LW link

[Question] People are gathering 2 million USD to save a kid with a rare disease. I feel weird about it. Why?

hookdump · Apr 2, 2021, 11:00 PM
9 points
7 comments · 1 min read · LW link

Overview of Rethink Priorities’ work on risks from nuclear weapons

MichaelA · Jun 11, 2021, 8:05 PM
12 points
0 comments · 3 min read · LW link

Announcing the Nuclear Risk Forecasting Tournament

MichaelA · Jun 16, 2021, 4:16 PM
16 points
2 comments · 2 min read · LW link

The Bunny: An EA Short Story

JohnGreer · Aug 21, 2022, 8:59 PM
15 points
0 comments · 6 min read · LW link

Five Areas I Wish EAs Gave More Focus

Prometheus · Oct 27, 2022, 6:13 AM
13 points
18 comments · 1 min read · LW link

Does “Ultimate Neartermism” via Eternal Inflation dominate Longtermism in expectation?

Jordan Arel · Aug 17, 2024, 10:28 PM
6 points
1 comment · 3 min read · LW link

Attention on AI X-Risk Likely Hasn’t Distracted from Current Harms from AI

Erich_Grunewald · Dec 21, 2023, 5:24 PM
26 points
2 comments · 17 min read · LW link
(www.erichgrunewald.com)

Comparing Alignment to other AGI interventions: Basic model

Martín Soto · Mar 20, 2024, 6:17 PM
12 points
4 comments · 7 min read · LW link

Why I stopped working on AI safety

jbkjr · May 2, 2024, 5:08 AM
−5 points
0 comments · 4 min read · LW link
(jbkjr.me)

Super human AI is a very low hanging fruit!

Hzn · Dec 26, 2024, 7:00 PM
−4 points
0 comments · 7 min read · LW link

The Alignment Mapping Program: Forging Independent Thinkers in AI Safety—A Pilot Retrospective

Jan 10, 2025, 4:22 PM
21 points
0 comments · 4 min read · LW link

Two arguments against longtermist thought experiments

momom2 · Nov 2, 2024, 10:22 AM
15 points
5 comments · 3 min read · LW link

Reducing x-risk might be actively harmful

MountainPath · Nov 18, 2024, 2:25 PM
5 points
5 comments · 1 min read · LW link

A case for donating to AI risk reduction (including if you work in AI)

tlevin · Dec 2, 2024, 7:05 PM
61 points
2 comments · 1 min read · LW link

Decision-Relevance of worlds and ADT implementations

Maxime Riché · Mar 6, 2025, 4:57 PM
9 points
0 comments · 15 min read · LW link