Consequentialism

Last edit: Apr 15, 2023, 6:16 PM by snerx

Consequentialism is the ethical theory that people should choose their actions based on the outcomes they expect will result. Particular frameworks of consequentialism specify how outcomes should be judged: for example, utilitarianism holds that the best outcome is the one that maximizes the total welfare of all people, while ethical egoism holds that the best outcome is the one that maximizes the agent's own interests. Consequentialism is one of the three main strands of ethical thought, alongside deontology, which holds that people should choose actions based on the merit of the act itself, and virtue ethics, which judges people by how virtuous they are, as reflected in their entire history of actions.

Related: Ethics & Morality, Deontology, Moral Uncertainty, Utilitarianism

Consequentialism is often associated with maximizing the expected value of a utility function. However, the two are not the same thing: it is possible to evaluate actions based on their consequences without obeying the von Neumann-Morgenstern axioms necessary for having a utility function, and conversely, utility functions can be used to implement moral theories that resemble deontology.
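
To make the distinction concrete, here is a minimal Python sketch (all outcome values, probabilities, and names are invented for illustration). Both evaluators below rank actions purely by their consequences, so both are consequentialist in the sense above, but only the first is an expected-utility maximizer; the second uses a lexicographic ranking, the textbook example of consequence-based preferences that violate the VNM continuity axiom and therefore cannot be represented by any real-valued utility function.

```python
# A minimal sketch (hypothetical numbers and names throughout) contrasting
# two consequentialist evaluators: an expected-utility maximizer and a
# lexicographic ranking that no real-valued utility function can represent.

# Each action induces a lottery over outcomes: a list of (probability, outcome)
# pairs, where an outcome is a (lives_saved, dollars) tuple.
actions = {
    "fund_hospital": [(1.0, (2, 0))],                 # certainly save 2 lives
    "risky_project": [(0.5, (3, 0)), (0.5, (0, 0))],  # 50% chance to save 3
    "sell_assets":   [(1.0, (0, 10_000))],            # no lives, much money
}

def expected_utility(lottery, utility):
    """VNM-style evaluation: probability-weighted average of a utility function."""
    return sum(p * utility(outcome) for p, outcome in lottery)

def lexicographic_key(lottery):
    """Still consequentialist, but not an expected utility: maximize expected
    lives saved first, and use expected dollars only to break exact ties.
    Lexicographic preferences violate the VNM continuity axiom, so no single
    real-valued utility function reproduces this ordering."""
    expected_lives = sum(p * o[0] for p, o in lottery)
    expected_dollars = sum(p * o[1] for p, o in lottery)
    return (expected_lives, expected_dollars)  # tuples compare lexicographically

# One possible utility function: a life is worth $1,000 here (an assumption).
utility = lambda o: 1000 * o[0] + o[1]

best_by_eu = max(actions, key=lambda a: expected_utility(actions[a], utility))
best_by_lex = max(actions, key=lambda a: lexicographic_key(actions[a]))
print(best_by_eu)   # "sell_assets": $10,000 outweighs 2 lives at $1,000 each
print(best_by_lex)  # "fund_hospital": no sum of dollars outweighs a life
```

Both rankings depend only on outcomes; the difference is simply whether the ranking happens to satisfy the VNM axioms, which is the point the paragraph above is making.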

Posts tagged Consequentialism

Ends Don’t Justify Means (Among Humans)
Eliezer Yudkowsky · Oct 14, 2008, 9:00 PM · 203 points · 97 comments · 4 min read

What’s Up With Confusingly Pervasive Goal Directedness?
Raemon · Jan 20, 2022, 7:22 PM · 172 points · 89 comments · 4 min read

Consequentialists: One-Way Pattern Traps
David Udell · Jan 16, 2023, 8:48 PM · 59 points · 3 comments · 14 min read

A discussion of normative ethics
Jan 9, 2024, 11:29 PM · 10 points · 6 comments · 25 min read

Reflective Consequentialism
Adam Zerner · Nov 18, 2022, 11:56 PM · 21 points · 14 comments · 4 min read

Torture vs. Dust Specks
Eliezer Yudkowsky · Oct 30, 2007, 2:50 AM · 84 points · 630 comments · 1 min read

Consequentialism Need Not Be Nearsighted
orthonormal · Sep 2, 2011, 7:37 AM · 83 points · 119 comments · 5 min read

Is it justifiable for non-experts to have strong opinions about Gaza?
Jan 8, 2024, 5:31 PM · 23 points · 12 comments · 30 min read

Resolving von Neumann-Morgenstern Inconsistent Preferences
niplav · Oct 22, 2024, 11:45 AM · 38 points · 5 comments · 58 min read

On Responsibility
silentbob · Jan 21, 2025, 10:47 AM · 9 points · 2 comments · 6 min read

Consequentialism is in the Stars not Ourselves
DragonGod · Apr 24, 2023, 12:02 AM · 7 points · 19 comments · 5 min read

Deontology for Consequentialists
Alicorn · Jan 30, 2010, 5:58 PM · 61 points · 255 comments · 6 min read

[link] Choose your (preference) utilitarianism carefully – part 1
Kaj_Sotala · Jun 25, 2015, 12:06 PM · 21 points · 6 comments · 2 min read

To capture anti-death intuitions, include memory in utilitarianism
Kaj_Sotala · Jan 15, 2014, 6:27 AM · 12 points · 34 comments · 3 min read

Answer to Job
Scott Alexander · Mar 15, 2015, 6:02 PM · 73 points · 7 comments · 4 min read

Transhumanism as Simplified Humanism
Eliezer Yudkowsky · Dec 5, 2018, 8:12 PM · 172 points · 34 comments · 5 min read

Are Deontological Moral Judgments Rationalizations?
lukeprog · Aug 16, 2011, 4:40 PM · 52 points · 170 comments · 11 min read

Review of Doris, ‘The Moral Psychology Handbook’ (2010)
lukeprog · Jun 26, 2011, 7:33 PM · 24 points · 10 comments · 5 min read

The Very Repugnant Conclusion
Stuart_Armstrong · Jan 18, 2019, 2:26 PM · 26 points · 19 comments · 1 min read

Antiantinatalism
Jacob Falkovich · Feb 9, 2018, 4:49 PM · 7 points · 4 comments · 5 min read

‘The Battle for Compassion’: ethics in a world of accelerating change
lukeprog · Sep 11, 2011, 12:54 PM · 5 points · 3 comments · 1 min read

The Moral Status of Independent Identical Copies
Wei Dai · Nov 30, 2009, 11:41 PM · 66 points · 80 comments · 2 min read

Person-moment affecting views
KatjaGrace · Mar 7, 2018, 2:30 AM · 17 points · 8 comments · 5 min read · (meteuphoric.wordpress.com)

Totalitarian ethical systems
Benquo · May 3, 2019, 7:35 PM · 33 points · 12 comments · 3 min read · (benjaminrosshoffman.com)

Feeling Moral
Eliezer Yudkowsky · Mar 11, 2015, 7:00 PM · 47 points · 8 comments · 3 min read

It’s hard to use utility maximization to justify creating new sentient beings
dynomight · Oct 19, 2020, 7:45 PM · 10 points · 14 comments · 1 min read · (dynomight.net)

Extrinsic and Intrinsic Moral Frameworks
lsusr · Dec 11, 2021, 5:28 AM · 14 points · 5 comments · 2 min read

[Question] How to tradeoff utility and agency?
A Ray · Jan 14, 2022, 1:33 AM · 14 points · 5 comments · 1 min read

[ASoT] Searching for consequentialist structure
leogao · Mar 27, 2022, 7:09 PM · 26 points · 2 comments · 4 min read

Deontology and virtue ethics as “effective theories” of consequentialist ethics
Jan_Kulveit · Nov 17, 2022, 2:11 PM · 68 points · 9 comments · 1 min read · 1 review

Massive consequences
KatjaGrace · Feb 7, 2021, 5:30 AM · 23 points · 15 comments · 1 min read · (worldspiritsockpuppet.com)

Review: G.E.M. Anscombe’s “Modern Moral Philosophy”
David Gross · Feb 20, 2022, 6:58 PM · 24 points · 3 comments · 5 min read

Language Models can be Utility-Maximising Agents
Raymond D · Feb 1, 2023, 6:13 PM · 22 points · 1 comment · 2 min read

The Dilemma’s Dilemma
James Stephen Brown · Feb 19, 2025, 11:50 PM · 7 points · 11 comments · 7 min read · (nonzerosum.games)

Some reasons why a predictor wants to be a consequentialist
Lauro Langosco · Apr 15, 2022, 3:02 PM · 23 points · 16 comments · 5 min read

The Unified Theory of Normative Ethics
Thane Ruthenis · Jun 17, 2022, 7:55 PM · 8 points · 0 comments · 6 min read

A gentle primer on caring, including in strange senses, with applications
Kaarel · Aug 30, 2022, 8:05 AM · 10 points · 4 comments · 18 min read

Money: The Unit of Caring
Eliezer Yudkowsky · Mar 31, 2009, 12:35 PM · 218 points · 132 comments · 4 min read

The Epsilon Fallacy
johnswentworth · Mar 17, 2018, 12:08 AM · 92 points · 21 comments · 7 min read · (medium.com)

Should we maximize the Geometric Expectation of Utility?
A.H. · Apr 17, 2024, 10:37 AM · 5 points · 17 comments · 9 min read

The Mere Cable Channel Addition Paradox
Ghatanathoah · Jul 26, 2012, 7:20 AM · 100 points · 147 comments · 12 min read

Shut Up and Divide?
Wei Dai · Feb 9, 2010, 8:09 PM · 114 points · 276 comments · 1 min read

One Life Against the World
Eliezer Yudkowsky · May 18, 2007, 10:06 PM · 125 points · 84 comments · 3 min read

Circular Altruism
Eliezer Yudkowsky · Jan 22, 2008, 6:00 PM · 87 points · 310 comments · 4 min read

The Lifespan Dilemma
Eliezer Yudkowsky · Sep 10, 2009, 6:45 PM · 61 points · 220 comments · 7 min read

Non-Consequentialist Cooperation?
abramdemski · Jan 11, 2019, 9:15 AM · 50 points · 15 comments · 7 min read

Welcome to Heaven
denisbider · Jan 25, 2010, 11:22 PM · 26 points · 246 comments · 2 min read

Pinpointing Utility
[deleted] · Feb 1, 2013, 3:58 AM · 94 points · 156 comments · 13 min read

Two-Tier Rationalism
Alicorn · Apr 17, 2009, 7:44 PM · 48 points · 26 comments · 4 min read

Dialogue on Appeals to Consequences
jessicata · Jul 18, 2019, 2:34 AM · 33 points · 87 comments · 7 min read · (unstableontology.com)

Logarithms and Total Utilitarianism
Pablo Villalobos · Aug 9, 2018, 8:49 AM · 37 points · 31 comments · 4 min read

[Question] How easy is it to supervise processes vs outcomes?
Noosphere89 · Oct 18, 2022, 5:48 PM · 3 points · 0 comments · 1 min read

Pain
Alicorn · Aug 2, 2009, 7:12 PM · 48 points · 202 comments · 2 min read

SotW: Check Consequentialism
Eliezer Yudkowsky · Mar 29, 2012, 1:35 AM · 58 points · 309 comments · 7 min read

A (small) critique of total utilitarianism
Stuart_Armstrong · Jun 26, 2012, 12:36 PM · 47 points · 237 comments · 11 min read

What sorts of systems can be deceptive?
Andrei Alexandru · Oct 31, 2022, 10:00 PM · 16 points · 0 comments · 7 min read

Undesirable Conclusions and Origin Adjustment
Jerdle · Feb 19, 2025, 6:35 PM · 3 points · 0 comments · 5 min read

Hell Must Be Destroyed
algekalipso · Dec 6, 2018, 4:11 AM · 33 points · 1 comment · 4 min read

Underappreciated points about utility functions (of both sorts)
Sniffnoy · Jan 4, 2020, 7:27 AM · 47 points · 61 comments · 15 min read

In Defence of Temporal Discounting in Longtermist Ethics
DragonGod · Nov 13, 2022, 9:54 PM · 25 points · 4 comments · 1 min read

The Preference Utilitarian’s Time Inconsistency Problem
Wei Dai · Jan 15, 2010, 12:26 AM · 35 points · 107 comments · 1 min read

Consequentialism FAQ
Scott Alexander · Apr 26, 2011, 1:45 AM · 39 points · 124 comments · 1 min read

Some reservations about Singer’s child-in-the-pond argument
JonahS · Jun 19, 2013, 11:54 PM · 40 points · 120 comments · 6 min read

Human errors, human values
PhilGoetz · Apr 9, 2011, 2:50 AM · 45 points · 138 comments · 1 min read

What we talk about when we talk about maximising utility
Richard_Ngo · Feb 24, 2018, 10:33 PM · 14 points · 18 comments · 4 min read

Sublimity vs. Youtube
Alicorn · Mar 18, 2011, 5:33 AM · 33 points · 63 comments · 1 min read

[Question] Is objective morality self-defeating?
dialectica · Jul 30, 2024, 6:23 PM · −4 points · 3 comments · 2 min read

An argument that consequentialism is incomplete
cousin_it · Oct 7, 2024, 9:45 AM · 33 points · 27 comments · 1 min read

[Question] Mathematical models of Ethics
Victors · Mar 8, 2023, 5:40 PM · 4 points · 2 comments · 1 min read

Which meat to eat: CO₂ vs Animal suffering
B Jacobs · Mar 6, 2025, 12:37 PM · 2 points · 2 comments · 3 min read · (bobjacobs.substack.com)

[Question] How likely is AGI to force us all to be happy forever? (much like in the Three Worlds Collide novel)
uhbif19 · Jan 18, 2025, 3:39 PM · 9 points · 5 comments · 1 min read

Why Attitudes Matter
ozymandias · Sep 21, 2017, 3:07 PM · 19 points · 5 comments · 4 min read

The Human Alignment Problem for AIs
rife · Jan 22, 2025, 4:06 AM · 10 points · 5 comments · 3 min read

Open-ended ethics of phenomena (a desiderata with universal morality)
Ryo · Nov 8, 2023, 8:10 PM · 1 point · 0 comments · 8 min read

The Human-AI Reflective Equilibrium
Allison Duettmann · Jan 24, 2023, 1:32 AM · 22 points · 1 comment · 24 min read

A Brief Introduction to Algorithmic Common Intelligence, ACI . 1
Akira Pyinya · Apr 5, 2023, 5:43 AM · −2 points · 1 comment · 2 min read

A Defense of Utilitarianism
Pareto Optimal · Apr 6, 2023, 9:09 PM · −3 points · 2 comments · 5 min read · (paretooptimal.substack.com)

Internal Target Information for AI Oversight
Paul Colognese · Oct 20, 2023, 2:53 PM · 15 points · 0 comments · 5 min read

Experientialist Theories of Well-Being
andzuck · Feb 19, 2021, 10:04 PM · 18 points · 1 comment · 11 min read

Neo-Mohism
Bae's Theorem · Jun 16, 2021, 9:57 PM · 5 points · 11 comments · 7 min read

Meta-Preference Utilitarianism
B Jacobs · Feb 4, 2020, 8:24 PM · 10 points · 30 comments · 1 min read

Open-ended/Phenomenal Ethics (TLDR)
Ryo · Nov 9, 2023, 4:58 PM · 3 points · 0 comments · 1 min read

Optionality approach to ethics
Ryo · Nov 13, 2023, 3:23 PM · 7 points · 2 comments · 3 min read

Why Save The Drowning Child: Ethics Vs Theory
Raymond D · Nov 16, 2021, 7:07 PM · 17 points · 12 comments · 4 min read

SBF’s comments on ethics are no surprise to virtue ethicists
c.trout · Dec 1, 2022, 4:18 AM · 36 points · 30 comments · 16 min read

Disentangling Shard Theory into Atomic Claims
Leon Lang · Jan 13, 2023, 4:23 AM · 86 points · 6 comments · 18 min read

Consequentialism is a compass, not a judge
Neil · Apr 13, 2024, 10:47 AM · 26 points · 6 comments · 2 min read

Intelligence–Agency Equivalence ≈ Mass–Energy Equivalence: On Static Nature of Intelligence & Physicalization of Ethics
ank · Feb 22, 2025, 12:12 AM · 1 point · 0 comments · 6 min read