Moral Uncertainty

Moral uncertainty (or normative uncertainty) is uncertainty about what we ought, morally, to do given the diversity of moral doctrines. For example, suppose we knew for certain that a new technology would enable more humans to live on another planet, but with slightly less well-being than on Earth.[1] An average utilitarian would consider these consequences bad, while a total utilitarian would endorse such technology. If we are uncertain about which of these two theories is right, what should we do?

Moral uncertainty adds a layer of uncertainty beyond the more familiar uncertainty about what to do given incomplete information, since it also concerns which moral theory is right. Even with complete information about the world, this kind of uncertainty would remain.[1] At the first level, one can be unsure how to act because the relevant empirical information is unavailable: for example, choosing whether or not to implement a new technology (e.g. AGI, biological cognitive enhancement, mind uploading) without fully knowing its consequences and nature. But even if we ideally knew each and every consequence of a new technology, we would still need to know which ethical perspective is the right one for evaluating those consequences.

One approach is to follow only the most probable theory. This has its own problems: for example, what if the most probable theory points only weakly in one direction, while other theories point strongly the other way? A better approach is to “perform the action with the highest expected moral value. We get the expected moral value of an action by multiplying the subjective probability that some theory is true by the value of that action if it is true, doing the same for all of the other theories, and adding up the results.”[2] However, this requires a method of comparing value across theories: a utilon in one theory may not be the same as a utilon in another. Outside consequentialism, many ethical theories don’t use utilons or any quantifiable values at all. Intertheoretic comparison of value remains an open problem.
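As a minimal sketch of this expected-moral-value rule (the theory names, credences, and numbers are hypothetical placeholders, and the code assumes values have already been placed on a common intertheoretic scale, which is exactly the open problem noted above):

```python
def expected_moral_value(action, credences, values):
    """Sum over theories of P(theory is true) * value of the action under that theory."""
    return sum(p * values[theory][action] for theory, p in credences.items())

# Hypothetical credences over two theories and their valuations of two actions,
# assuming a common "utilon" scale across theories.
credences = {"average_utilitarianism": 0.6, "total_utilitarianism": 0.4}
values = {
    "average_utilitarianism": {"deploy_tech": -10, "refrain": 0},
    "total_utilitarianism":   {"deploy_tech": 50,  "refrain": 0},
}

actions = ["deploy_tech", "refrain"]
best = max(actions, key=lambda a: expected_moral_value(a, credences, values))
# deploy_tech: 0.6 * (-10) + 0.4 * 50 = 14 > 0, so it is chosen even though the
# more probable theory weakly disfavors it.
print(best)
```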

Nick Bostrom and Toby Ord have proposed a parliamentary model. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The theories then bargain for support as if the probability of each action were proportional to its votes. However, the actual output is always the action with the most votes. Bostrom and Ord’s proposal lets probable theories determine most actions, but still gives less probable theories influence on issues they consider unusually important.
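A rough sketch of the delegate-allocation step is below, under the simplifying (and admittedly lossy) assumption that each delegation just votes for its theory's favorite action; the bargaining between delegations, which is the substantive part of Bostrom and Ord's proposal, is left out.

```python
def allocate_delegates(credences, parliament_size=100):
    """Assign each theory delegates in proportion to its probability."""
    return {theory: round(p * parliament_size) for theory, p in credences.items()}

def plurality_outcome(delegates, favorite_action):
    """Naive outcome: each delegation votes for its theory's favorite action and
    the action with the most votes wins. Real bargaining would let small
    delegations trade votes to win on issues they consider unusually important."""
    votes = {}
    for theory, n in delegates.items():
        action = favorite_action[theory]
        votes[action] = votes.get(action, 0) + n
    return max(votes, key=votes.get)

delegates = allocate_delegates({"average_utilitarianism": 0.6, "total_utilitarianism": 0.4})
print(plurality_outcome(delegates, {"average_utilitarianism": "refrain",
                                    "total_utilitarianism": "deploy_tech"}))
# With 60 vs. 40 delegates and no bargaining, "refrain" wins.
```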

Even with a high degree of moral uncertainty and a wide range of possible moral theories, certain actions seem highly valuable under almost any theory. Bostrom argues that existential risk reduction is among them, maintaining that it is not only the most important task under most versions of consequentialism but also highly recommended by many other widely accepted moral theories.[3]

References

  1. Crouch, William (2010). “Moral Uncertainty and Intertheoretic Comparisons of Value”. BPhil Thesis. p. 6. Available at: http://oxford.academia.edu/WilliamCrouch/Papers/873903/Moral_Uncertainty_and_Intertheoretic_Comparisons_of_Value

  2. Sepielli, Andrew (2008). “Moral Uncertainty and the Principle of Equity among Moral Theories”. ISUS-X, Tenth Conference of the International Society for Utilitarian Studies, Kadish Center for Morality, Law and Public Affairs, UC Berkeley. Available at: http://escholarship.org/uc/item/7h5852rr.pdf

  3. Bostrom, Nick (2012). “Existential Risk Reduction as the Most Important Task for Humanity”. Global Policy, forthcoming. p. 22. Available at: http://www.existential-risk.org/concept.pdf

Polymath-style attack on the Parliamentary Model for moral uncertainty

danieldewey26 Sep 2014 13:51 UTC
36 points
74 comments4 min readLW link

Normativity

abramdemski18 Nov 2020 16:52 UTC
47 points
11 comments9 min readLW link

Morality vs related concepts

MichaelA7 Jan 2020 10:47 UTC
26 points
17 comments8 min readLW link

Three kinds of moral uncertainty

Kaj_Sotala30 Dec 2012 10:43 UTC
57 points
15 comments2 min readLW link

Moral uncertainty vs related concepts

MichaelA11 Jan 2020 10:03 UTC
26 points
13 comments16 min readLW link

Moral uncertainty: What kind of ‘should’ is involved?

MichaelA13 Jan 2020 12:13 UTC
14 points
11 comments13 min readLW link

Value uncertainty

MichaelA29 Jan 2020 20:16 UTC
20 points
3 comments14 min readLW link

Making decisions under moral uncertainty

MichaelA30 Dec 2019 1:49 UTC
21 points
26 comments17 min readLW link

Making decisions when both morally and empirically uncertain

MichaelA2 Jan 2020 7:20 UTC
13 points
14 comments20 min readLW link

Value Uncertainty and the Singleton Scenario

Wei Dai24 Jan 2010 5:03 UTC
10 points
31 comments3 min readLW link

AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch

DanielFilan29 Dec 2020 20:45 UTC
27 points
0 comments28 min readLW link

Review and Summary of ‘Moral Uncertainty’

fin7 Oct 2020 17:52 UTC
11 points
7 comments1 min readLW link
(www.finmoorhouse.com)

Nick Bostrom: Moral uncertainty – towards a solution? [link, 2009]

Kevin8 Mar 2012 11:07 UTC
−5 points
8 comments1 min readLW link

Altruism Under Extreme Uncertainty

lsusr27 Aug 2021 6:58 UTC
37 points
9 comments2 min readLW link

2018 AI Alignment Literature Review and Charity Comparison

Larks18 Dec 2018 4:46 UTC
190 points
26 comments62 min readLW link1 review

Notes on Benevolence

David Gross26 Nov 2020 2:43 UTC
25 points
2 comments8 min readLW link

2019 AI Alignment Literature Review and Charity Comparison

Larks19 Dec 2019 3:00 UTC
130 points
18 comments62 min readLW link

Preliminary thoughts on moral weight

lukeprog13 Aug 2018 23:45 UTC
93 points
49 comments8 min readLW link2 reviews

Arguments for moral indefinability

Richard_Ngo12 Feb 2019 10:40 UTC
50 points
10 comments7 min readLW link
(thinkingcomplete.blogspot.com)

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Palus Astra16 Apr 2020 0:50 UTC
58 points
27 comments89 min readLW link

Ontological Crisis in Humans

Wei Dai18 Dec 2012 17:32 UTC
89 points
69 comments4 min readLW link

Ideas for benchmarking LLM creativity

gwern16 Dec 2024 5:18 UTC
50 points
10 comments1 min readLW link
(gwern.net)

Fundamental Uncertainty: Chapter 3 - Why don’t we agree on what’s right?

Gordon Seidoh Worley25 Jun 2022 17:50 UTC
27 points
22 comments14 min readLW link

Six Plausible Meta-Ethical Alternatives

Wei Dai6 Aug 2014 0:04 UTC
90 points
37 comments3 min readLW link

Sortition Model of Moral Uncertainty

B Jacobs8 Oct 2020 17:44 UTC
9 points
2 comments2 min readLW link

Is requires ought

jessicata28 Oct 2019 2:36 UTC
30 points
60 comments7 min readLW link
(unstableontology.com)

Whither Moral Progress?

Eliezer Yudkowsky16 Jul 2008 5:04 UTC
24 points
101 comments2 min readLW link

Invisible Frameworks

Eliezer Yudkowsky22 Aug 2008 3:36 UTC
27 points
47 comments6 min readLW link

(Moral) Truth in Fiction?

Eliezer Yudkowsky9 Feb 2009 17:26 UTC
25 points
82 comments7 min readLW link

Just a reminder: Scientists are, technically, people.

PhilGoetz20 Mar 2009 20:33 UTC
8 points
35 comments1 min readLW link

Average utilitarianism must be correct?

PhilGoetz6 Apr 2009 17:10 UTC
5 points
168 comments3 min readLW link

My main problem with utilitarianism

taw17 Apr 2009 20:26 UTC
−1 points
84 comments2 min readLW link

While we’re on the subject of meta-ethics...

CronoDAS17 Apr 2009 8:01 UTC
7 points
4 comments1 min readLW link

Wednesday depends on us.

byrnema29 Apr 2009 3:47 UTC
2 points
43 comments3 min readLW link

Fiction of interest

dclayh29 Apr 2009 18:47 UTC
14 points
16 comments1 min readLW link

Conventions and Confusing Continuity Conundrums

Psy-Kosh1 May 2009 1:41 UTC
5 points
9 comments1 min readLW link

The Sword of Good

Eliezer Yudkowsky3 Sep 2009 0:53 UTC
156 points
303 comments2 min readLW link

Essay-Question Poll: Dietary Choices

Alicorn3 May 2009 15:27 UTC
17 points
244 comments1 min readLW link

Resolving moral uncertainty with randomization

29 Sep 2023 11:23 UTC
7 points
1 comment11 min readLW link

Revisiting torture vs. dust specks

cousin_it8 Jul 2009 11:04 UTC
10 points
66 comments2 min readLW link

The sailor’s wife

krbouchard27 Feb 2021 0:23 UTC
4 points
2 comments2 min readLW link

Moral Golems

Erich_Grunewald3 Apr 2021 10:12 UTC
8 points
2 comments6 min readLW link
(www.erichgrunewald.com)

RFC: Meta-ethical uncertainty in AGI alignment

Gordon Seidoh Worley8 Jun 2018 20:56 UTC
16 points
6 comments3 min readLW link

Which animals realize which types of subjective welfare?

MichaelStJules27 Feb 2024 19:31 UTC
4 points
0 comments1 min readLW link

The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications

Eleos Arete Citrini16 Sep 2021 16:13 UTC
6 points
0 comments8 min readLW link

Morally underdefined situations can be deadly

Stuart_Armstrong22 Nov 2021 14:48 UTC
17 points
8 comments2 min readLW link

Updated Deference is not a strong argument against the utility uncertainty approach to alignment

Ivan Vendrov24 Jun 2022 19:32 UTC
26 points
8 comments4 min readLW link

Decision making under model ambiguity, moral uncertainty, and other agents with free will?

Jobst Heitzig13 Nov 2022 12:50 UTC
4 points
0 comments1 min readLW link
(forum.effectivealtruism.org)

Lying to Save Humanity

cebsuvx14 Nov 2022 23:04 UTC
−1 points
4 comments1 min readLW link

Fair Collective Efficient Altruism

Jobst Heitzig25 Nov 2022 9:38 UTC
2 points
1 comment5 min readLW link

A gentle apocalypse

pchvykov16 Aug 2021 5:03 UTC
3 points
5 comments3 min readLW link

The Case Against Moral Realism

Zero Contradictions7 Nov 2024 10:14 UTC
−5 points
10 comments1 min readLW link
(thewaywardaxolotl.blogspot.com)

Values Darwinism

pchvykov22 Jan 2024 10:44 UTC
11 points
13 comments3 min readLW link

Solution to the two envelopes problem for moral weights

MichaelStJules19 Feb 2024 0:15 UTC
9 points
1 comment1 min readLW link

[Linkpost] Leif Wenar’s The Deaths of Effective Altruism

Arden27 Mar 2024 19:17 UTC
8 points
1 comment1 min readLW link
(www.wired.com)

Gradations of moral weight

MichaelStJules29 Feb 2024 23:08 UTC
1 point
0 comments1 min readLW link

Sequence overview: Welfare and moral weights

MichaelStJules15 Aug 2024 4:22 UTC
7 points
0 comments1 min readLW link

[Question] How to model uncertainty about preferences?

quetzal_rainbow24 Mar 2023 19:04 UTC
10 points
2 comments1 min readLW link

The Screenplay Method

Yeshua God24 Oct 2023 17:41 UTC
−15 points
0 comments25 min readLW link

When is unaligned AI morally valuable?

paulfchristiano25 May 2018 1:57 UTC
81 points
53 comments10 min readLW link

Two-Tier Rationalism

Alicorn17 Apr 2009 19:44 UTC
48 points
26 comments4 min readLW link

Realism and Rationality

bmgarfinkel16 Sep 2019 3:09 UTC
45 points
49 comments23 min readLW link

Protected From Myself

Eliezer Yudkowsky19 Oct 2008 0:09 UTC
47 points
30 comments6 min readLW link

For the past, in some ways only, we are moral degenerates

Stuart_Armstrong7 Jun 2019 15:57 UTC
32 points
17 comments2 min readLW link

AI should be used to find better morality

Jorterder2 Jun 2023 20:38 UTC
−21 points
1 comment1 min readLW link

Nonparametric Ethics

Eliezer Yudkowsky20 Jun 2009 11:31 UTC
40 points
60 comments5 min readLW link

Is the potential astronomical waste in our universe too small to care about?

Wei Dai21 Oct 2014 8:44 UTC
88 points
17 comments2 min readLW link

Human errors, human values

PhilGoetz9 Apr 2011 2:50 UTC
45 points
138 comments1 min readLW link

Universal Eudaimonia

hg005 Oct 2020 13:45 UTC
19 points
6 comments2 min readLW link