Moral Uncertainty

Moral uncertainty (or normative uncertainty) is uncertainty about what we ought, morally, to do given the diversity of moral doctrines. For example, suppose that we knew for certain that a new technology would enable more humans to live on another planet, with slightly less well-being than on Earth.[1] An average utilitarian would consider these consequences bad, while a total utilitarian would endorse such a technology. If we are uncertain about which of these two theories is right, what should we do?
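To make the disagreement concrete, here is a toy calculation with made-up numbers (the populations and well-being levels are purely illustrative):

```python
# Illustrative numbers only: 10 people on Earth at well-being 5,
# plus 5 colonists at a slightly lower well-being of 4.
earth = [5] * 10
colony = [4] * 5

def total_welfare(population):
    return sum(population)

def average_welfare(population):
    return sum(population) / len(population)

before, after = earth, earth + colony
print(total_welfare(before), total_welfare(after))      # 50 -> 70: total utilitarianism approves
print(average_welfare(before), average_welfare(after))  # 5.0 -> ~4.67: average utilitarianism disapproves
```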

Moral uncertainty adds a layer of uncertainty beyond the more familiar uncertainty about what to do given incomplete information, because it also concerns which moral theory is right. Even with complete information about the world, this kind of uncertainty would remain.[1] At the first level, one may be unsure how to act because the relevant empirical information is unavailable, for example when choosing whether or not to implement a new technology (e.g. AGI, biological cognitive enhancement, or mind uploading) without fully knowing its consequences and nature. But even if we knew each and every consequence of a new technology, we would still need to know which ethical perspective is right for evaluating those consequences.

One approach is to follow only the most probable theory. This has its own problems: what if the most probable theory points only weakly in one direction, while other theories point strongly the other way? A better approach is to “perform the action with the highest expected moral value. We get the expected moral value of an action by multiplying the subjective probability that some theory is true by the value of that action if it is true, doing the same for all of the other theories, and adding up the results.”[2] However, this requires a method of comparing value across theories: a utilon in one theory may not be the same as a utilon in another. Outside consequentialism, many ethical theories do not use utilons or any quantifiable values at all. This intertheoretic comparison problem remains open.
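As a minimal sketch of the expected-moral-value calculation, assuming (contentiously, given the intertheoretic-comparison problem just noted) that the theories' valuations can be placed on a common scale, with hypothetical credences and values:

```python
# Hypothetical credences in two theories and each action's value under each theory,
# assumed (contentiously) to be expressed on a common scale.
credences = {"average_utilitarianism": 0.6, "total_utilitarianism": 0.4}
values = {
    "deploy_technology":   {"average_utilitarianism": -10, "total_utilitarianism": 20},
    "withhold_technology": {"average_utilitarianism":   0, "total_utilitarianism":  0},
}

def expected_moral_value(action):
    # Weight each theory's valuation of the action by the credence that the theory is true.
    return sum(credences[theory] * values[action][theory] for theory in credences)

best_action = max(values, key=expected_moral_value)
print({a: expected_moral_value(a) for a in values}, "->", best_action)
# deploy_technology scores 0.6*(-10) + 0.4*20 = 2, so it narrowly wins here.
```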

Nick Bostrom and Toby Ord have proposed a parliamentary model. In this model, each theory sends a number of delegates to a parliament in proportion to its probability. The delegates then bargain for support as if the probability of each action being chosen were proportional to its votes; however, the actual output is always the action with the most votes. Bostrom and Ord’s proposal lets probable theories determine most actions, but still gives less probable theories influence on issues they consider unusually important.
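A minimal sketch of the delegate-allocation and voting steps might look like the following; the bargaining between delegates, which does the real work in Bostrom and Ord’s proposal, is deliberately omitted, and all theory names, credences, and actions are hypothetical:

```python
# Allocate seats in a 100-member moral parliament in proportion to credence,
# then pick the action with the most delegate votes. Bargaining is omitted.
PARLIAMENT_SIZE = 100
credences = {"average_utilitarianism": 0.6, "total_utilitarianism": 0.3, "deontology": 0.1}
delegates = {theory: round(p * PARLIAMENT_SIZE) for theory, p in credences.items()}

def parliamentary_vote(preferred_action):
    """preferred_action maps each theory to the action its delegates vote for."""
    tally = {}
    for theory, action in preferred_action.items():
        tally[action] = tally.get(action, 0) + delegates[theory]
    return max(tally, key=tally.get)

print(delegates)  # {'average_utilitarianism': 60, 'total_utilitarianism': 30, 'deontology': 10}
print(parliamentary_vote({
    "average_utilitarianism": "withhold_technology",
    "total_utilitarianism": "deploy_technology",
    "deontology": "withhold_technology",
}))  # 'withhold_technology' wins with 70 votes
```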

Even with a high degree of moral uncertainty and a wide range of possible moral theories, certain actions still seem highly valuable on almost any theory. Bostrom argues that existential risk reduction is among them, contending that it is not only the most important task on most versions of consequentialism but is also strongly recommended by many other widely accepted moral theories.[3]

References

  1. Crouch, William (2010). “Moral Uncertainty and Intertheoretic Comparisons of Value”. BPhil thesis, 2010, p. 6. Available at: http://oxford.academia.edu/WilliamCrouch/Papers/873903/Moral_Uncertainty_and_Intertheoretic_Comparisons_of_Value

  2. Sepielli, Andrew (2008). “Moral Uncertainty and the Principle of Equity among Moral Theories”. ISUS-X, Tenth Conference of the International Society for Utilitarian Studies, Kadish Center for Morality, Law and Public Affairs, UC Berkeley. Available at: http://escholarship.org/uc/item/7h5852rr.pdf

  3. Bostrom, Nick (2012). “Existential Risk Reduction as the Most Important Task for Humanity”. Global Policy (forthcoming), p. 22. Available at: http://www.existential-risk.org/concept.pdf

Polymath-style attack on the Parliamentary Model for moral uncertainty

danieldewey, Sep 26, 2014, 1:51 PM
36 points
74 comments, 4 min read, LW link

Normativity

abramdemski, Nov 18, 2020, 4:52 PM
47 points
11 comments, 9 min read, LW link

Morality vs related concepts

MichaelA, Jan 7, 2020, 10:47 AM
26 points
17 comments, 8 min read, LW link

Three kinds of moral uncertainty

Kaj_Sotala, Dec 30, 2012, 10:43 AM
57 points
15 comments, 2 min read, LW link

Moral uncertainty vs related concepts

MichaelA, Jan 11, 2020, 10:03 AM
26 points
13 comments, 16 min read, LW link

Moral uncertainty: What kind of ‘should’ is involved?

MichaelA, Jan 13, 2020, 12:13 PM
14 points
11 comments, 13 min read, LW link

Value uncertainty

MichaelA, Jan 29, 2020, 8:16 PM
20 points
3 comments, 14 min read, LW link

Making decisions under moral uncertainty

MichaelA, Dec 30, 2019, 1:49 AM
21 points
26 comments, 17 min read, LW link

Making decisions when both morally and empirically uncertain

MichaelA, Jan 2, 2020, 7:20 AM
13 points
14 comments, 20 min read, LW link

Value Uncertainty and the Singleton Scenario

Wei Dai, Jan 24, 2010, 5:03 AM
13 points
31 comments, 3 min read, LW link

AXRP Episode 3 - Negotiable Reinforcement Learning with Andrew Critch

DanielFilan, Dec 29, 2020, 8:45 PM
27 points
0 comments, 28 min read, LW link

Review and Summary of ‘Moral Uncertainty’

fin, Oct 7, 2020, 5:52 PM
11 points
7 comments, 1 min read, LW link
(www.finmoorhouse.com)

Nick Bostrom: Moral uncertainty – towards a solution? [link, 2009]

Kevin, Mar 8, 2012, 11:07 AM
−5 points
8 comments, 1 min read, LW link

Altruism Under Extreme Uncertainty

lsusr, Aug 27, 2021, 6:58 AM
37 points
9 comments, 2 min read, LW link

2018 AI Alignment Literature Review and Charity Comparison

Larks, Dec 18, 2018, 4:46 AM
190 points
26 comments, 62 min read, LW link, 1 review

Notes on Benevolence

David Gross, Nov 26, 2020, 2:43 AM
25 points
2 comments, 8 min read, LW link

2019 AI Alignment Literature Review and Charity Comparison

Larks, Dec 19, 2019, 3:00 AM
130 points
18 comments, 62 min read, LW link

Preliminary thoughts on moral weight

lukeprog, Aug 13, 2018, 11:45 PM
93 points
49 comments, 8 min read, LW link, 2 reviews

Arguments for moral indefinability

Richard_Ngo, Feb 12, 2019, 10:40 AM
50 points
10 comments, 7 min read, LW link
(thinkingcomplete.blogspot.com)

AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Palus Astra, Apr 16, 2020, 12:50 AM
58 points
27 comments, 89 min read, LW link

Ontological Crisis in Humans

Wei Dai, Dec 18, 2012, 5:32 PM
90 points
69 comments, 4 min read, LW link

Ideas for benchmarking LLM creativity

gwern, Dec 16, 2024, 5:18 AM
56 points
11 comments, 1 min read, LW link
(gwern.net)

Fundamental Uncertainty: Chapter 3 - Why don’t we agree on what’s right?

Gordon Seidoh Worley, Jun 25, 2022, 5:50 PM
27 points
22 comments, 14 min read, LW link

Six Plausible Meta-Ethical Alternatives

Wei Dai, Aug 6, 2014, 12:04 AM
91 points
39 comments, 3 min read, LW link

Sortition Model of Moral Uncertainty

B Jacobs, Oct 8, 2020, 5:44 PM
9 points
2 comments, 2 min read, LW link

Is requires ought

jessicata, Oct 28, 2019, 2:36 AM
30 points
60 comments, 7 min read, LW link
(unstableontology.com)

Whither Moral Progress?

Eliezer Yudkowsky, Jul 16, 2008, 5:04 AM
24 points
101 comments, 2 min read, LW link

Invisible Frameworks

Eliezer Yudkowsky, Aug 22, 2008, 3:36 AM
27 points
47 comments, 6 min read, LW link

(Moral) Truth in Fiction?

Eliezer Yudkowsky, Feb 9, 2009, 5:26 PM
25 points
82 comments, 7 min read, LW link

Just a reminder: Scientists are, technically, people.

PhilGoetz, Mar 20, 2009, 8:33 PM
8 points
35 comments, 1 min read, LW link

Average utilitarianism must be correct?

PhilGoetz, Apr 6, 2009, 5:10 PM
5 points
168 comments, 3 min read, LW link

My main problem with utilitarianism

taw, Apr 17, 2009, 8:26 PM
−1 points
84 comments, 2 min read, LW link

While we’re on the subject of meta-ethics...

CronoDAS, Apr 17, 2009, 8:01 AM
7 points
4 comments, 1 min read, LW link

Wednesday depends on us.

byrnema, Apr 29, 2009, 3:47 AM
2 points
43 comments, 3 min read, LW link

Fiction of interest

dclayh, Apr 29, 2009, 6:47 PM
14 points
16 comments, 1 min read, LW link

Conventions and Confusing Continuity Conundrums

Psy-Kosh, May 1, 2009, 1:41 AM
5 points
9 comments, 1 min read, LW link

The Sword of Good

Eliezer Yudkowsky, Sep 3, 2009, 12:53 AM
156 points
303 comments, 2 min read, LW link

Essay-Question Poll: Dietary Choices

Alicorn, May 3, 2009, 3:27 PM
17 points
244 comments, 1 min read, LW link

Resolving moral uncertainty with randomization

Sep 29, 2023, 11:23 AM
7 points
1 comment, 11 min read, LW link

Revisiting torture vs. dust specks

cousin_it, Jul 8, 2009, 11:04 AM
10 points
66 comments, 2 min read, LW link

The sailor’s wife

krbouchard, Feb 27, 2021, 12:23 AM
4 points
2 comments, 2 min read, LW link

Moral Golems

Erich_Grunewald, Apr 3, 2021, 10:12 AM
8 points
2 comments, 6 min read, LW link
(www.erichgrunewald.com)

RFC: Meta-ethical uncertainty in AGI alignment

Gordon Seidoh Worley, Jun 8, 2018, 8:56 PM
16 points
6 comments, 3 min read, LW link

Which animals realize which types of subjective welfare?

MichaelStJules, Feb 27, 2024, 7:31 PM
4 points
0 comments, 1 min read, LW link

The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications

Eleos Arete Citrini, Sep 16, 2021, 4:13 PM
6 points
0 comments, 8 min read, LW link

Morally underdefined situations can be deadly

Stuart_Armstrong, Nov 22, 2021, 2:48 PM
17 points
8 comments, 2 min read, LW link

Updated Deference is not a strong argument against the utility uncertainty approach to alignment

Ivan Vendrov, Jun 24, 2022, 7:32 PM
26 points
8 comments, 4 min read, LW link

Decision making under model ambiguity, moral uncertainty, and other agents with free will?

Jobst Heitzig, Nov 13, 2022, 12:50 PM
4 points
0 comments, 1 min read, LW link
(forum.effectivealtruism.org)

Lying to Save Humanity

cebsuvx, Nov 14, 2022, 11:04 PM
−1 points
4 comments, 1 min read, LW link

Fair Collective Efficient Altruism

Jobst Heitzig, Nov 25, 2022, 9:38 AM
2 points
1 comment, 5 min read, LW link

A gentle apocalypse

pchvykov, Aug 16, 2021, 5:03 AM
3 points
5 comments, 3 min read, LW link

The Case Against Moral Realism

Zero Contradictions, Nov 7, 2024, 10:14 AM
−5 points
10 comments, 1 min read, LW link
(thewaywardaxolotl.blogspot.com)

Values Darwinism

pchvykov, Jan 22, 2024, 10:44 AM
11 points
13 comments, 3 min read, LW link

Solution to the two envelopes problem for moral weights

MichaelStJules, Feb 19, 2024, 12:15 AM
9 points
1 comment, 1 min read, LW link

[Linkpost] Leif Wenar’s The Deaths of Effective Altruism

Arden, Mar 27, 2024, 7:17 PM
8 points
1 comment, 1 min read, LW link
(www.wired.com)

Gradations of moral weight

MichaelStJules, Feb 29, 2024, 11:08 PM
1 point
0 comments, 1 min read, LW link

Sequence overview: Welfare and moral weights

MichaelStJules, Aug 15, 2024, 4:22 AM
7 points
0 comments, 1 min read, LW link

[Question] How to model uncertainty about preferences?

quetzal_rainbow, Mar 24, 2023, 7:04 PM
10 points
2 comments, 1 min read, LW link

The Screenplay Method

Yeshua God, Oct 24, 2023, 5:41 PM
−15 points
0 comments, 25 min read, LW link

When is unaligned AI morally valuable?

paulfchristiano, May 25, 2018, 1:57 AM
81 points
53 comments, 10 min read, LW link

Two-Tier Rationalism

Alicorn, Apr 17, 2009, 7:44 PM
48 points
26 comments, 4 min read, LW link

Realism and Rationality

bmgarfinkel, Sep 16, 2019, 3:09 AM
45 points
49 comments, 23 min read, LW link

Protected From Myself

Eliezer Yudkowsky, Oct 19, 2008, 12:09 AM
48 points
30 comments, 6 min read, LW link

For the past, in some ways only, we are moral degenerates

Stuart_Armstrong, Jun 7, 2019, 3:57 PM
32 points
17 comments, 2 min read, LW link

AI should be used to find better morality

Jorterder, Jun 2, 2023, 8:38 PM
−21 points
1 comment, 1 min read, LW link

Nonparametric Ethics

Eliezer Yudkowsky, Jun 20, 2009, 11:31 AM
40 points
60 comments, 5 min read, LW link

Is the potential astronomical waste in our universe too small to care about?

Wei Dai, Oct 21, 2014, 8:44 AM
91 points
17 comments, 2 min read, LW link

Human errors, human values

PhilGoetz, Apr 9, 2011, 2:50 AM
45 points
138 comments, 1 min read, LW link

Universal Eudaimonia

hg00, Oct 5, 2020, 1:45 PM
19 points
6 comments, 2 min read, LW link