Risks of Astronomical Suffering (S-risks)

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are a subclass of existential risks (also known as x-risks) according to Nick Bostrom’s original definition, as they threaten to “permanently and drastically curtail [Earth-originating intelligent life’s] potential”. Most existential risks take the form “event E happens which drastically reduces the number of conscious experiences in the future”; s-risks instead involve the creation of intensely bad experiences. They are therefore a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.

Within the space of x-risks, we can distinguish those that involve immense suffering, those that involve human extinction, those that involve both, and those that involve neither. For example:

|                    | extinction risk | non-extinction risk |
|--------------------|-----------------|---------------------|
| suffering risk     | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with experiences of severe suffering. |
| non-suffering risk | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as “pets,” limiting growth but not causing immense suffering. |

A related concept is hyperexistential risk, the risk of “fates worse than death” on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since “tiling the universe with experiences of severe suffering” would likely be worse than death.

Two EA organizations have s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR’s work is on suffering-focused AI safety and crucial considerations. The Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks, though to a much lesser extent.

Another approach to reducing s-risks is to “expand the moral circle” while raising concern for suffering, so that future (post)human civilizations and AIs are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.

The case against AI alignment
andrew sauer, Dec 24, 2022, 6:57 AM · 126 points · 110 comments · 5 min read

S-Risks: Fates Worse Than Extinction
May 4, 2024, 3:30 PM · 53 points · 2 comments · 6 min read (youtu.be)

S-risks: Why they are the worst existential risks, and how to prevent them
Kaj_Sotala, Jun 20, 2017, 12:34 PM · 31 points · 106 comments · 1 min read (foundational-research.org)

Preface to CLR’s Research Agenda on Cooperation, Conflict, and TAI
JesseClifton, Dec 13, 2019, 9:02 PM · 62 points · 10 comments · 2 min read

Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority
ignoranceprior, Oct 14, 2016, 7:58 PM · 13 points · 4 comments · 1 min read (foundational-research.org)

New book on s-risks
Tobias_Baumann, Oct 28, 2022, 9:36 AM · 68 points · 1 comment · 1 min read

[Question] Outcome Terminology?
Dach, Sep 14, 2020, 6:04 PM · 6 points · 0 comments · 1 min read

How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe?
Anirandis, Sep 10, 2020, 12:40 AM · 20 points · 19 comments · 2 min read

Sections 5 & 6: Contemporary Architectures, Humans in the Loop
JesseClifton, Dec 20, 2019, 3:52 AM · 27 points · 4 comments · 10 min read

[Question] How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying?
JohnGreer, Mar 28, 2023, 6:00 PM · 11 points · 4 comments · 1 min read

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms
JesseClifton, Dec 17, 2019, 9:46 PM · 20 points · 2 comments · 12 min read

Sections 1 & 2: Introduction, Strategy and Governance
JesseClifton, Dec 17, 2019, 9:27 PM · 35 points · 8 comments · 14 min read

Section 7: Foundations of Rational Agency
JesseClifton, Dec 22, 2019, 2:05 AM · 14 points · 4 comments · 8 min read

Mini map of s-risks
turchin, Jul 8, 2017, 12:33 PM · 6 points · 34 comments · 2 min read

[untitled post]
superads91, Feb 4, 2022, 7:03 PM · 14 points · 17 comments · 1 min read

[Question] How likely do you think worse-than-extinction type fates to be?
span1, Aug 1, 2022, 4:08 AM · 3 points · 3 comments · 1 min read

Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse
avturchin, Sep 27, 2018, 10:09 AM · 12 points · 6 comments · 4 min read

The Dilemma of Worse Than Death Scenarios
arkaeik, Jul 10, 2018, 9:18 AM · 14 points · 18 comments · 4 min read

A fate worse than death?
RomanS, Dec 13, 2021, 11:05 AM · −25 points · 26 comments · 2 min read

The Security Mindset, S-Risk and Publishing Prosaic Alignment Research
lukemarks, Apr 22, 2023, 2:36 PM · 39 points · 7 comments · 5 min read

How likely do you think worse-than-extinction type fates to be?
span1, Mar 24, 2023, 9:03 PM · 5 points · 4 comments · 1 min read

[Question] p(s-risks to contemporary humans)?
mhampton, Feb 8, 2025, 9:19 PM · 6 points · 5 comments · 6 min read

[Question] Likelihood of hyperexistential catastrophe from a bug?
Anirandis, Jun 18, 2020, 4:23 PM · 14 points · 27 comments · 1 min read

Siren worlds and the perils of over-optimised search
Stuart_Armstrong, Apr 7, 2014, 11:00 AM · 83 points · 418 comments · 7 min read

AI alignment researchers may have a comparative advantage in reducing s-risks
Lukas_Gloor, Feb 15, 2023, 1:01 PM · 49 points · 1 comment · 1 min read

[Question] (Crosspost) Asking for online calls on AI s-risks discussions
jackchang110, May 15, 2023, 5:42 PM · 1 point · 0 comments · 1 min read (forum.effectivealtruism.org)

Rosko’s Wager
Wuksh, May 16, 2023, 7:18 AM · 1 point · 0 comments · 2 min read

Risk of Mass Human Suffering / Extinction due to Climate Emergency
willfranks, Mar 14, 2019, 6:32 PM · 4 points · 3 comments · 1 min read

Constitutions for ASI?
ukc10014, Jan 28, 2025, 4:32 PM · 3 points · 0 comments · 1 min read (forum.effectivealtruism.org)

[Question] What are some scenarios where an aligned AGI actually helps humanity, but many/most people don’t like it?
RomanS, Jan 10, 2025, 6:13 PM · 13 points · 6 comments · 3 min read

The Waluigi Effect (mega-post)
Cleo Nardo, Mar 3, 2023, 3:22 AM · 628 points · 188 comments · 16 min read

Averting suffering with sentience throttlers (proposal)
Quinn, Apr 5, 2021, 10:54 AM · 8 points · 7 comments · 3 min read

CLR’s recent work on multi-agent systems
JesseClifton, Mar 9, 2021, 2:28 AM · 54 points · 2 comments · 13 min read

[Book Review] “Suffering-focused Ethics” by Magnus Vinding
KStub, Dec 28, 2021, 5:58 AM · 15 points · 3 comments · 24 min read

[Question] What are the surviving worlds like?
KvmanThinking, Feb 17, 2025, 12:41 AM · 21 points · 2 comments · 1 min read

Paradigm-building from first principles: Effective altruism, AGI, and alignment
Cameron Berg, Feb 8, 2022, 4:12 PM · 29 points · 5 comments · 14 min read

Sentience Institute 2023 End of Year Summary
michael_dello, Nov 27, 2023, 12:11 PM · 11 points · 0 comments · 5 min read (www.sentienceinstitute.org)

Making AIs less likely to be spiteful
Sep 26, 2023, 2:12 PM · 116 points · 4 comments · 10 min read

[Question] Should you refrain from having children because of the risk posed by artificial intelligence?
Mientras, Sep 9, 2022, 5:39 PM · 17 points · 31 comments · 1 min read

Briefly how I’ve updated since ChatGPT
rime, Apr 25, 2023, 2:47 PM · 48 points · 2 comments · 2 min read

AE Studio @ SXSW: We need more AI consciousness research (and further resources)
Mar 26, 2024, 8:59 PM · 67 points · 8 comments · 3 min read