Risks of Astronomical Suffering (S-risks)

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are a subclass of existential risks (also known as x-risks) under Nick Bostrom’s original definition, as they threaten to “permanently and drastically curtail [Earth-originating intelligent life’s] potential”. Most existential risks take the form “event E happens which drastically reduces the number of conscious experiences in the future”. S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, not just because they prevent good ones.

Within the space of x-risks, we can distinguish along two axes: whether a risk involves human extinction, and whether it involves astronomical suffering. This yields four combinations, for example:

|                    | Extinction risk | Non-extinction risk |
|--------------------|-----------------|---------------------|
| Suffering risk     | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with experiences of severe suffering. |
| Non-suffering risk | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as “pets,” limiting growth but not causing immense suffering. |
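
The two axes in the table are independent, and only the suffering axis makes something an s-risk. As a purely illustrative aid (not part of the original page), here is a minimal Python sketch of that classification; the `Scenario` class, its field names, and the example labels are hypothetical and simply restate the table above.

```python
# Minimal illustrative sketch of the 2x2 classification above.
# The class, field names, and examples are hypothetical, not an established API.
from dataclasses import dataclass


@dataclass(frozen=True)
class Scenario:
    causes_extinction: bool              # does the outcome involve human extinction?
    causes_astronomical_suffering: bool  # does it create suffering on an astronomical scale?


def is_s_risk(scenario: Scenario) -> bool:
    # S-risk status depends only on the suffering axis; extinction is orthogonal.
    return scenario.causes_astronomical_suffering


examples = {
    "AGI wipes out humans, simulates suffering civilizations": Scenario(True, True),
    "AGI tiles the universe with severe suffering": Scenario(False, True),
    "AGI wipes out humans": Scenario(True, False),
    "AGI keeps humans as 'pets'": Scenario(False, False),
}

for name, scenario in examples.items():
    print(f"{name}: s-risk={is_s_risk(scenario)}, extinction={scenario.causes_extinction}")
```

Running it just prints each quadrant of the table with its classification.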

A related concept is hyperexistential risk, the risk of “fates worse than death” on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since “tiling the universe with experiences of severe suffering” would likely be worse than death.

Two EA organizations have s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR’s work is on suffering-focused AI safety and crucial considerations. To a much lesser extent, the Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks.

Another approach to reducing s-risks is to “expand the moral circle” while raising concern for suffering, so that future (post)human civilizations and AIs are less likely to instrumentally cause suffering to non-humans such as animals or digital minds. Sentience Institute works on this value-spreading problem.

The case against AI alignment
andrew sauer · 24 Dec 2022 6:57 UTC · 118 points · 110 comments · 5 min read

S-Risks: Fates Worse Than Extinction
4 May 2024 15:30 UTC · 53 points · 2 comments · 6 min read · (youtu.be)

S-risks: Why they are the worst existential risks, and how to prevent them
Kaj_Sotala · 20 Jun 2017 12:34 UTC · 31 points · 106 comments · 1 min read · (foundational-research.org)

Preface to CLR’s Research Agenda on Cooperation, Conflict, and TAI
JesseClifton · 13 Dec 2019 21:02 UTC · 62 points · 10 comments · 2 min read

How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe?
Anirandis · 10 Sep 2020 0:40 UTC · 20 points · 19 comments · 2 min read

Reducing Risks of Astronomical Suffering (S-Risks): A Neglected Global Priority
ignoranceprior · 14 Oct 2016 19:58 UTC · 13 points · 4 comments · 1 min read · (foundational-research.org)

New book on s-risks
Tobias_Baumann · 28 Oct 2022 9:36 UTC · 68 points · 1 comment · 1 min read

[Question] Outcome Terminology?
Dach · 14 Sep 2020 18:04 UTC · 6 points · 0 comments · 1 min read

[Question] How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying?
JohnGreer · 28 Mar 2023 18:00 UTC · 11 points · 4 comments · 1 min read

Section 7: Foundations of Rational Agency
JesseClifton · 22 Dec 2019 2:05 UTC · 14 points · 4 comments · 8 min read

Sections 5 & 6: Contemporary Architectures, Humans in the Loop
JesseClifton · 20 Dec 2019 3:52 UTC · 27 points · 4 comments · 10 min read

Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms
JesseClifton · 17 Dec 2019 21:46 UTC · 20 points · 2 comments · 12 min read

Sections 1 & 2: Introduction, Strategy and Governance
JesseClifton · 17 Dec 2019 21:27 UTC · 35 points · 8 comments · 14 min read

[untitled post]
superads91 · 4 Feb 2022 19:03 UTC · 14 points · 17 comments · 1 min read

Mini map of s-risks
turchin · 8 Jul 2017 12:33 UTC · 6 points · 34 comments · 2 min read

[Question] How likely do you think worse-than-extinction type fates to be?
span1 · 1 Aug 2022 4:08 UTC · 3 points · 3 comments · 1 min read

Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do.
Chi Nguyen · 23 Feb 2024 6:10 UTC · 54 points · 18 comments · 1 min read

Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse
avturchin · 27 Sep 2018 10:09 UTC · 11 points · 6 comments · 4 min read

The Dilemma of Worse Than Death Scenarios
arkaeik · 10 Jul 2018 9:18 UTC · 14 points · 18 comments · 4 min read

A fate worse than death?
RomanS · 13 Dec 2021 11:05 UTC · −25 points · 26 comments · 2 min read

Briefly how I’ve updated since ChatGPT
rime · 25 Apr 2023 14:47 UTC · 48 points · 2 comments · 2 min read

The Security Mindset, S-Risk and Publishing Prosaic Alignment Research
lukemarks · 22 Apr 2023 14:36 UTC · 39 points · 7 comments · 5 min read

Accurate Models of AI Risk Are Hyperexistential Exfohazards
Thane Ruthenis · 25 Dec 2022 16:50 UTC · 31 points · 38 comments · 9 min read

Siren worlds and the perils of over-optimised search
Stuart_Armstrong · 7 Apr 2014 11:00 UTC · 83 points · 418 comments · 7 min read

AI alignment researchers may have a comparative advantage in reducing s-risks
Lukas_Gloor · 15 Feb 2023 13:01 UTC · 48 points · 1 comment · 1 min read

[Question] (Crosspost) Asking for online calls on AI s-risks discussions
jackchang110 · 15 May 2023 17:42 UTC · 1 point · 0 comments · 1 min read · (forum.effectivealtruism.org)

Rosko’s Wager
Wuksh · 16 May 2023 7:18 UTC · 1 point · 0 comments · 2 min read

Risk of Mass Human Suffering / Extinction due to Climate Emergency
willfranks · 14 Mar 2019 18:32 UTC · 4 points · 3 comments · 1 min read

How likely do you think worse-than-extinction type fates to be?
span1 · 24 Mar 2023 21:03 UTC · 5 points · 4 comments · 1 min read

[Question] If AI starts to end the world, is suicide a good idea?
IlluminateReality · 9 Jul 2024 21:53 UTC · 0 points · 8 comments · 1 min read

The Waluigi Effect (mega-post)
Cleo Nardo · 3 Mar 2023 3:22 UTC · 628 points · 187 comments · 16 min read

Suffering-Focused Ethics in the Infinite Universe. How can we redeem ourselves if Multiverse Immortality is real and subjective death is impossible.
Szymon Kucharski · 24 Feb 2021 21:02 UTC · −2 points · 4 comments · 70 min read

Physicalism implies experience never dies. So what am I going to experience after it does?
Szymon Kucharski · 14 Mar 2021 14:45 UTC · −1 points · 1 comment · 30 min read

Averting suffering with sentience throttlers (proposal)
Quinn · 5 Apr 2021 10:54 UTC · 8 points · 7 comments · 3 min read

CLR’s recent work on multi-agent systems
JesseClifton · 9 Mar 2021 2:28 UTC · 54 points · 2 comments · 13 min read

[Book Review] “Suffering-focused Ethics” by Magnus Vinding
KStub · 28 Dec 2021 5:58 UTC · 15 points · 3 comments · 24 min read

Old man’s story
RomanS · 29 Dec 2023 14:37 UTC · 3 points · 0 comments · 1 min read

[untitled post]
superads91 · 6 Feb 2022 20:39 UTC · −5 points · 8 comments · 1 min read

Paradigm-building from first principles: Effective altruism, AGI, and alignment
Cameron Berg · 8 Feb 2022 16:12 UTC · 29 points · 5 comments · 14 min read

[Question] Should you refrain from having children because of the risk posed by artificial intelligence?
Mientras · 9 Sep 2022 17:39 UTC · 17 points · 31 comments · 1 min read

Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition
Adrià Moret · 2 Dec 2023 14:07 UTC · 26 points · 31 comments · 42 min read

Sentience Institute 2023 End of Year Summary
michael_dello · 27 Nov 2023 12:11 UTC · 11 points · 0 comments · 5 min read · (www.sentienceinstitute.org)

AE Studio @ SXSW: We need more AI consciousness research (and further resources)
26 Mar 2024 20:59 UTC · 67 points · 8 comments · 3 min read

[Question] Likelihood of hyperexistential catastrophe from a bug?
Anirandis · 18 Jun 2020 16:23 UTC · 14 points · 27 comments · 1 min read

Making AIs less likely to be spiteful
26 Sep 2023 14:12 UTC · 116 points · 4 comments · 10 min read