Metaethics

Last edit: Feb 19, 2025, 10:08 PM by RobertM

Metaethics asks “What kind of stuff is goodness made of?” (or “How would we compute goodness?”) rather than “Which particular policies or outcomes are good or not-good?” (In other words, it asks meta-level questions about ethics, rather than object-level moral questions like, “Ought I to steal from banks in order to give the money to the deserving poor?”)

Metaethics is one of the three branches of ethics usually recognized by philosophers, the others being normative ethics and applied ethics.

Metaethics on LessWrong

Eliezer Yudkowsky wrote a sequence of posts about metaethics, the Metaethics sequence, which he worried had failed to convey his central point (this post by Luke Muehlhauser tried to clarify it); he approached the same problem again from a different angle in Highly Advanced Epistemology 101 for Beginners. From a standard philosophical standpoint, Yudkowsky's view is closest to Frank Jackson's moral functionalism / analytic descriptivism. He could be loosely characterized as a moral cognitivist (someone who holds that moral sentences are either true or false) but not a moral realist (he denies that moral sentences refer to facts about the world). Yudkowsky believes that moral cognition in any single human is at least potentially about a subject matter that is 'logical' in the sense that its semantics can be pinned down by axioms, and hence that moral cognition can bear truth-values; that human beings who use similar words like “morality” can be talking about highly overlapping subject matter; but not that all possible minds would find the truths about this subject matter psychologically compelling.

Luke Muehlhauser has written a sequence, No-Nonsense Metaethics, in which he claims that many of the questions of metaethics can be answered today using modern neuroscience and rationality. He explains how conventional metaethics, or “Austere Metaethics”, can pick out the right action in a given situation once a definition of ‘right’ is assumed, but is useless without some such criterion for ‘right’. He proposes instead “Empathic Metaethics”, which examines your underlying cognitive algorithms to understand what you think ‘right’ means, helps clarify any emotional and cognitive contradictions in that concept, and then tells you what the right thing to do is according to your definition of right. This approach is highly relevant to the Friendly AI problem as a way of defining human-like goals and motivations when designing AIs.

By Which It May Be Judged

Eliezer Yudkowsky, Dec 10, 2012, 4:26 AM
95 points
941 comments, 11 min read, LW link

Moral uncertainty vs related concepts

MichaelA, Jan 11, 2020, 10:03 AM
26 points
13 comments, 16 min read, LW link

The Value Definition Problem

Sammy Martin, Nov 18, 2019, 7:56 PM
15 points
6 comments, 11 min read, LW link

What is Eliezer Yudkowsky’s meta-ethical theory?

lukeprog, Jan 29, 2011, 7:58 PM
51 points
375 comments, 1 min read, LW link

Deontology for Consequentialists

Alicorn, Jan 30, 2010, 5:58 PM
61 points
255 comments, 6 min read, LW link

Moral uncertainty: What kind of ‘should’ is involved?

MichaelA, Jan 13, 2020, 12:13 PM
14 points
11 comments, 13 min read, LW link

Conceptual Analysis and Moral Theory

lukeprog, May 16, 2011, 6:28 AM
95 points
481 comments, 8 min read, LW link

Realism and Rationality

bmgarfinkel, Sep 16, 2019, 3:09 AM
45 points
50 comments, 23 min read, LW link

Existential Angst Factory

Eliezer Yudkowsky, Jul 19, 2008, 6:55 AM
78 points
100 comments, 4 min read, LW link

Why didn’t people (apparently?) understand the metaethics sequence?

ChrisHallquist, Oct 29, 2013, 11:04 PM
23 points
231 comments, 1 min read, LW link

Could Anything Be Right?

Eliezer Yudkowsky, Jul 18, 2008, 7:19 AM
75 points
39 comments, 6 min read, LW link

Can we make peace with moral indeterminacy?

Charlie Steiner, Oct 3, 2019, 12:56 PM
16 points
8 comments, 4 min read, LW link

Meta-preferences two ways: generator vs. patch

Charlie Steiner, Apr 1, 2020, 12:51 AM
18 points
0 comments, 2 min read, LW link

Is Morality Given?

Eliezer Yudkowsky, Jul 6, 2008, 8:12 AM
35 points
100 comments, 8 min read, LW link

Logical Foundations of Government Policy

FCCC, Oct 10, 2020, 5:05 PM
2 points
0 comments, 17 min read, LW link

Deconfusing Human Values Research Agenda v1

Gordon Seidoh Worley, Mar 23, 2020, 4:25 PM
28 points
12 comments, 4 min read, LW link

[Valence series] 2. Valence & Normativity

Steven Byrnes, Dec 7, 2023, 4:43 PM
88 points
7 comments, 28 min read, LW link, 1 review

The AGI Optimist’s Dilemma

kaputmi, Feb 23, 2023, 8:20 PM
−6 points
1 comment, 1 min read, LW link

Three Worlds Collide (0/8)

Eliezer Yudkowsky, Jan 30, 2009, 12:07 PM
100 points
97 comments, 1 min read, LW link

Normativity and Meta-Philosophy

Wei Dai, Apr 23, 2013, 8:35 PM
29 points
56 comments, 1 min read, LW link

Changing Your Metaethics

Eliezer Yudkowsky, Jul 27, 2008, 12:36 PM
64 points
20 comments, 5 min read, LW link

The Meaning of Right

Eliezer Yudkowsky, Jul 29, 2008, 1:28 AM
61 points
156 comments, 23 min read, LW link

A theory of human values

Stuart_Armstrong, Mar 13, 2019, 3:22 PM
28 points
13 comments, 7 min read, LW link

25 Min Talk on MetaEthical.AI with Questions from Stuart Armstrong

June Ku, Apr 29, 2021, 3:38 PM
21 points
7 comments, 1 min read, LW link

Implications of Moral Realism on AI Safety

Myles H, Jan 2, 2025, 2:58 AM
7 points
1 comment, 3 min read, LW link

Response to nostalgebraist: proudly waving my moral-antirealist battle flag

Steven Byrnes, May 29, 2024, 4:48 PM
103 points
29 comments, 11 min read, LW link

Superintelligence Can’t Solve the Problem of Deciding What You’ll Do

Vladimir_Nesov, Sep 15, 2024, 9:03 PM
27 points
11 comments, 1 min read, LW link

Quick thoughts on empathic metaethics

lukeprog, Dec 12, 2017, 9:46 PM
29 points
0 comments, 9 min read, LW link

The Categorical Imperative Obscures

Gordon Seidoh Worley, Dec 6, 2022, 5:48 PM
17 points
17 comments, 2 min read, LW link

Against meta-ethical hedonism

Joe Carlsmith, Dec 2, 2022, 12:23 AM
24 points
4 comments, 35 min read, LW link

Arguments for moral indefinability

Richard_Ngo, Sep 30, 2023, 10:40 PM
47 points
16 comments, 7 min read, LW link
(www.thinkingcomplete.com)

Fundamental Uncertainty: Chapter 3 - Why don’t we agree on what’s right?

Gordon Seidoh Worley, Jun 25, 2022, 5:50 PM
27 points
22 comments, 14 min read, LW link

Optionality approach to ethics

Ryo, Nov 13, 2023, 3:23 PM
7 points
2 comments, 3 min read, LW link

Reaction to “Empowerment is (almost) All We Need”: an open-ended alternative

Ryo, Nov 25, 2023, 3:35 PM
9 points
3 comments, 5 min read, LW link

Six Plausible Meta-Ethical Alternatives

Wei Dai, Aug 6, 2014, 12:04 AM
92 points
39 comments, 3 min read, LW link

The Urgent Meta-Ethics of Friendly Artificial Intelligence

lukeprog, Feb 1, 2011, 2:15 PM
75 points
252 comments, 1 min read, LW link

Moral Reality Check (a short story)

jessicata, Nov 26, 2023, 5:03 AM
149 points
45 comments, 21 min read, LW link, 1 review
(unstableontology.com)

Pluralistic Moral Reductionism

lukeprog, Jun 1, 2011, 12:59 AM
64 points
327 comments, 15 min read, LW link

No Universally Compelling Arguments

Eliezer Yudkowsky, Jun 26, 2008, 8:29 AM
89 points
58 comments, 5 min read, LW link

The Sheer Folly of Callow Youth

Eliezer Yudkowsky, Sep 19, 2008, 1:30 AM
89 points
18 comments, 7 min read, LW link

Setting Up Metaethics

Eliezer Yudkowsky, Jul 28, 2008, 2:25 AM
27 points
34 comments, 4 min read, LW link

Causality and Moral Responsibility

Eliezer Yudkowsky, Jun 13, 2008, 8:34 AM
55 points
55 comments, 5 min read, LW link

You Provably Can’t Trust Yourself

Eliezer Yudkowsky, Aug 19, 2008, 8:35 PM
49 points
19 comments, 6 min read, LW link

Criticism of Eliezer’s irrational moral beliefs

Jorterder, Jun 16, 2023, 8:47 PM
−17 points
21 comments, 1 min read, LW link

Elements of Computational Philosophy, Vol. I: Truth

Jul 1, 2023, 11:44 AM
12 points
6 comments, 1 min read, LW link
(compphil.github.io)

Philosophical self-ratification

jessicata, Feb 3, 2020, 10:48 PM
23 points
13 comments, 5 min read, LW link
(unstableontology.com)

Mirrors and Paintings

Eliezer Yudkowsky, Aug 23, 2008, 12:29 AM
29 points
42 comments, 8 min read, LW link

The Bedrock of Fairness

Eliezer Yudkowsky, Jul 3, 2008, 6:00 AM
58 points
103 comments, 5 min read, LW link

Invisible Frameworks

Eliezer Yudkowsky, Aug 22, 2008, 3:36 AM
27 points
47 comments, 6 min read, LW link

Ethics Notes

Eliezer Yudkowsky, Oct 21, 2008, 9:57 PM
20 points
46 comments, 11 min read, LW link

While we’re on the subject of meta-ethics...

CronoDAS, Apr 17, 2009, 8:01 AM
7 points
4 comments, 1 min read, LW link

[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality

Shmi, Jun 19, 2014, 8:17 PM
31 points
46 comments, 1 min read, LW link

What are the leftover questions of metaethics?

cousin_it, Apr 28, 2011, 8:46 AM
30 points
55 comments, 1 min read, LW link

Morality is Awesome

[deleted], Jan 6, 2013, 3:21 PM
146 points
437 comments, 3 min read, LW link

Moral Golems

Erich_Grunewald, Apr 3, 2021, 10:12 AM
8 points
2 comments, 6 min read, LW link
(www.erichgrunewald.com)

Neo-Mohism

Bae's Theorem, Jun 16, 2021, 9:57 PM
5 points
11 comments, 7 min read, LW link

[Question] How can there be a godless moral world?

momom2, Jun 21, 2021, 12:34 PM
7 points
79 comments, 1 min read, LW link

The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications

Eleos Arete Citrini, Sep 16, 2021, 4:13 PM
6 points
0 comments, 8 min read, LW link

[Book Review] “Suffering-focused Ethics” by Magnus Vinding

KStub, Dec 28, 2021, 5:58 AM
15 points
3 comments, 24 min read, LW link

Review: G.E.M. Anscombe’s “Modern Moral Philosophy”

David Gross, Feb 20, 2022, 6:58 PM
24 points
3 comments, 5 min read, LW link

What Should AI Owe To Us? Accountable and Aligned AI Systems via Contractualist AI Alignment

xuan, Sep 8, 2022, 3:04 PM
26 points
16 comments, 25 min read, LW link

Two Dogmas of LessWrong

omnizoid, Dec 15, 2022, 5:56 PM
−7 points
155 comments, 69 min read, LW link

Reflection Mechanisms as an Alignment Target—Attitudes on “near-term” AI

Mar 2, 2023, 4:29 AM
21 points
0 comments, 8 min read, LW link

Morality vs related concepts

MichaelA, Jan 7, 2020, 10:47 AM
26 points
17 comments, 8 min read, LW link

Moral Complexities

Eliezer Yudkowsky, Jul 4, 2008, 6:43 AM
31 points
40 comments, 1 min read, LW link

Inseparably Right; or, Joy in the Merely Good

Eliezer Yudkowsky, Aug 9, 2008, 1:00 AM
57 points
33 comments, 4 min read, LW link

RFC: Meta-ethical uncertainty in AGI alignment

Gordon Seidoh Worley, Jun 8, 2018, 8:56 PM
16 points
6 comments, 3 min read, LW link

Solving For Meta-Ethics By Inducing From The Self

VisionaryHera, Jan 20, 2023, 7:21 AM
4 points
1 comment, 9 min read, LW link

What kind of place is this?

Jim Pivarski, Feb 25, 2023, 2:14 AM
24 points
24 comments, 8 min read, LW link

[Question] Mathematical models of Ethics

Victors, Mar 8, 2023, 5:40 PM
4 points
2 comments, 1 min read, LW link

Value Pluralism and AI

Göran Crafte, Mar 19, 2023, 11:38 PM
8 points
4 comments, 2 min read, LW link

Heading Toward: No-Nonsense Metaethics

lukeprog, Apr 24, 2011, 12:42 AM
55 points
60 comments, 2 min read, LW link

A Benchmark for Decision Theories

StrivingForLegibility, Jan 11, 2024, 6:54 PM
10 points
0 comments, 2 min read, LW link

Why small phenomenons are relevant to morality

Ryo, Nov 13, 2023, 3:25 PM
1 point
0 comments, 3 min read, LW link

The Moral Void

Eliezer Yudkowsky, Jun 30, 2008, 8:52 AM
79 points
111 comments, 4 min read, LW link

Created Already In Motion

Eliezer Yudkowsky, Jul 1, 2008, 6:03 AM
91 points
23 comments, 3 min read, LW link

On Objective Ethics, and a bit about boats

EndlessBlue, May 31, 2023, 11:40 AM
−7 points
3 comments, 2 min read, LW link

Whither Moral Progress?

Eliezer Yudkowsky, Jul 16, 2008, 5:04 AM
24 points
101 comments, 2 min read, LW link

Moral Error and Moral Disagreement

Eliezer Yudkowsky, Aug 10, 2008, 11:32 PM
26 points
133 comments, 6 min read, LW link

Inner Goodness

Eliezer Yudkowsky, Oct 23, 2008, 10:19 PM
27 points
31 comments, 7 min read, LW link

Is Fairness Arbitrary?

Eliezer Yudkowsky, Aug 14, 2008, 1:54 AM
9 points
37 comments, 6 min read, LW link

Resolving moral uncertainty with randomization

Sep 29, 2023, 11:23 AM
7 points
1 comment, 11 min read, LW link

RFC: Philosophical Conservatism in AI Alignment Research

Gordon Seidoh Worley, May 15, 2018, 3:29 AM
17 points
13 comments, 1 min read, LW link

Reflection Mechanisms as an Alignment target: A survey

Jun 22, 2022, 3:05 PM
32 points
1 comment, 14 min read, LW link

Against the normative realist’s wager

Joe Carlsmith, Oct 13, 2022, 4:35 PM
16 points
9 comments, 23 min read, LW link

Questions about Value Lock-in, Paternalism, and Empowerment

Sam F. Brown, Nov 16, 2022, 3:33 PM
13 points
2 comments, 12 min read, LW link
(sambrown.eu)

What can thought-experiments do?

Cleo Nardo, Jan 17, 2023, 12:35 AM
16 points
3 comments, 5 min read, LW link

Rational Utopia & Narrow Way There: Multiversal AI Alignment, Non-Agentic Static Place AI, New Ethics… (V. 4)

ank, Feb 11, 2025, 3:21 AM
13 points
8 comments, 35 min read, LW link

Open-ended ethics of phenomena (a desiderata with universal morality)

Ryo, Nov 8, 2023, 8:10 PM
1 point
0 comments, 8 min read, LW link

Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: On Static Nature of Intelligence & Physicalization of Ethics

ank, Feb 22, 2025, 12:12 AM
1 point
0 comments, 6 min read, LW link

As We May Align

Gilbert C, Dec 20, 2024, 7:02 PM
−1 points
0 comments, 6 min read, LW link

What is Morality?

Zero Contradictions, Jul 29, 2024, 7:19 PM
−1 points
0 comments, 1 min read, LW link
(thewaywardaxolotl.blogspot.com)

[Question] Is objective morality self-defeating?

dialectica, Jul 30, 2024, 6:23 PM
−4 points
3 comments, 2 min read, LW link

Karpenchuk’s Theory: Human Life as a Simulation for Consciousness Development

Karpenchuk Bohdan, Aug 2, 2024, 12:03 AM
1 point
0 comments, 2 min read, LW link

Wagering on Will And Worth (Pascals Wager for Free Will and Value)

Robert Cousineau, Nov 27, 2024, 12:43 AM
−1 points
2 comments, 3 min read, LW link

Implications—How Conscious Significance Could Inform Our lives

James Stephen Brown, Nov 26, 2024, 5:42 PM
7 points
0 comments, 13 min read, LW link

My unbundling of morality

Rudi C, Dec 30, 2020, 3:19 PM
7 points
2 comments, 1 min read, LW link

AI Alignment, Philosophical Pluralism, and the Relevance of Non-Western Philosophy

xuan, Jan 1, 2021, 12:08 AM
31 points
21 comments, 20 min read, LW link

Value uncertainty

MichaelA, Jan 29, 2020, 8:16 PM
20 points
3 comments, 14 min read, LW link

Open-ended/Phenomenal Ethics (TLDR)

Ryo, Nov 9, 2023, 4:58 PM
3 points
0 comments, 1 min read, LW link