Utility Functions

A utility function assigns numerical values ("utilities") to outcomes, such that outcomes with higher utilities are always preferred to outcomes with lower utilities. This consistency requirement, which leaves no exploitable holes in the preference ordering, is what distinguishes utility from mere reward.
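
As a minimal formalization (the notation here is supplied for illustration and is not part of the tag's definition): a utility function on a set of outcomes $O$ is a map

$$u : O \to \mathbb{R} \qquad \text{with} \qquad A \succeq B \iff u(A) \ge u(B),$$

and under uncertainty a lottery $L$ that yields outcome $o_i$ with probability $p_i$ is ranked by its expected utility $U(L) = \sum_i p_i \, u(o_i)$.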

See also: Complexity of Value, Decision Theory, Game Theory, Orthogonality Thesis, Utilitarianism, Preference, Utility, VNM Theorem

Utility functions do not work very well in practice as models of individual humans. Human drives are not coherent, and there is no reason to think they would converge to utility-function-grade reliability (Thou Art Godshatter); even people with a strong interest in the concept struggle to work out what their own utility function might be (Post Your Utility Function). Furthermore, humans appear to compute reward and loss separately: summing the two does not predict behavior accurately, so human reward is not human utility. This leaves humans exploitable, and not being exploitable is a minimum requirement for having a coherent utility function.
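
To make the exploitability point concrete, here is a minimal money-pump sketch (purely illustrative; the items, the fee, and the trading rule are assumptions rather than anything from the posts below). An agent with cyclic preferences pays a small fee for every swap it prefers and can be walked around the cycle indefinitely without ever ending up better off:

```python
# Minimal money-pump sketch (illustrative assumptions throughout).
# An agent with cyclic preferences A > B > C > A pays FEE for each swap
# to an item it prefers, so it can be led around the cycle forever.

FEE = 1  # hypothetical price the agent pays for each "upgrade"

# Cyclic (intransitive) preferences: A over B, B over C, and C over A.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def will_trade(holding, offered):
    """The agent trades (and pays FEE) whenever it prefers the offered item."""
    return (offered, holding) in PREFERS

def run_money_pump(start="C", laps=3):
    holding, paid = start, 0
    for offered in ["B", "A", "C"] * laps:  # offers that walk the agent around the cycle
        if will_trade(holding, offered):
            holding = offered
            paid += FEE
    return holding, paid

if __name__ == "__main__":
    holding, paid = run_money_pump()
    # The agent ends up where it started in the cycle but has paid on every swap.
    print(f"final holding: {holding}, total paid: {paid}")
```

A VNM-coherent agent has no such cycle to exploit, which is the sense in which non-exploitability is a minimum requirement for having a utility function.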

pjeby posits humans’ difficulty in understanding their own utility functions as the root of akrasia.

However, utility functions can be a useful model for dealing with humans in groups, e.g. in economics.

The VNM Theorem tag is effectively a strict subtag of this one: the VNM theorem establishes when preferences can be represented by a utility function, so any post about the theorem is also about utility functions, whereas a post discussing utility functions may or may not discuss the VNM theorem or its axioms.
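
For reference, the theorem's standard content, summarized: if a preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function $u$ over outcomes such that

$$L \succeq M \;\iff\; \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u],$$

and $u$ is unique up to positive affine transformations $u \mapsto a u + b$ with $a > 0$.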

Because utility functions arise from VNM rationality rather than from any explicit representation, they can still be useful for understanding intelligent systems even when the system does not store a utility function anywhere: as a system's exploitable errors are driven down, its behavior should converge toward utility-function-like guarantees.

Coherent decisions imply consistent utilities

Eliezer Yudkowsky12 May 2019 21:33 UTC
149 points
81 comments26 min readLW link3 reviews

An Orthodox Case Against Utility Functions

abramdemski7 Apr 2020 19:18 UTC
155 points
65 comments8 min readLW link2 reviews

Coherence arguments do not entail goal-directed behavior

Rohin Shah3 Dec 2018 3:26 UTC
133 points
69 comments7 min readLW link3 reviews

Approximately Bayesian Reasoning: Knightian Uncertainty, Goodhart, and the Look-Elsewhere Effect

RogerDearnaley26 Jan 2024 3:58 UTC
16 points
2 comments11 min readLW link

Bayesian Utility: Representing Preference by Probability Measures

Vladimir_Nesov27 Jul 2009 14:28 UTC
48 points
37 comments2 min readLW link

How easily can we separate a friendly AI in design space from one which would bring about a hyperexistential catastrophe?

Anirandis10 Sep 2020 0:40 UTC
20 points
19 comments2 min readLW link

Utility ≠ Reward

Vlad Mikulik5 Sep 2019 17:28 UTC
130 points
24 comments1 min readLW link2 reviews

Why Not Subagents?

22 Jun 2023 22:16 UTC
130 points
47 comments14 min readLW link

Time and Effort Discounting

Scott Alexander7 Jul 2011 23:48 UTC
65 points
32 comments4 min readLW link

The Human’s Hidden Utility Function (Maybe)

lukeprog23 Jan 2012 19:39 UTC
67 points
90 comments3 min readLW link

money ≠ value

stonefly30 Apr 2023 17:47 UTC
2 points
3 comments3 min readLW link

Orthogonality is expensive

beren3 Apr 2023 10:20 UTC
43 points
9 comments3 min readLW link

Coherence arguments imply a force for goal-directed behavior

KatjaGrace26 Mar 2021 16:10 UTC
91 points
25 comments11 min readLW link1 review
(aiimpacts.org)

The VNM independence axiom ignores the value of information

kilobug2 Mar 2013 14:36 UTC
15 points
48 comments1 min readLW link

I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the “utility function” abstraction

Eli Tyre22 Jun 2021 3:53 UTC
41 points
29 comments4 min readLW link

[Question] How do bounded utility functions work if you are uncertain how close to the bound your utility is?

Ghatanathoah6 Oct 2021 21:31 UTC
13 points
26 comments2 min readLW link

The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments

johnswentworth1 Jun 2019 20:34 UTC
42 points
5 comments3 min readLW link

Ngo and Yudkowsky on AI capability gains

18 Nov 2021 22:19 UTC
130 points
61 comments39 min readLW link1 review

Consequentialism & corrigibility

Steven Byrnes14 Dec 2021 13:23 UTC
66 points
27 comments7 min readLW link

The Allais Paradox

Eliezer Yudkowsky19 Jan 2008 3:05 UTC
62 points
145 comments3 min readLW link

LeCun says making a utility function is intractable

Iknownothing28 Jun 2023 18:02 UTC
2 points
3 comments1 min readLW link

Inferring utility functions from locally non-transitive preferences

Jan10 Feb 2022 10:33 UTC
32 points
15 comments8 min readLW link
(universalprior.substack.com)

Updating Utility Functions

9 May 2022 9:44 UTC
41 points
6 comments8 min readLW link

Deontology for Consequentialists

Alicorn30 Jan 2010 17:58 UTC
61 points
255 comments6 min readLW link

Thinking about Broad Classes of Utility-like Functions

J Bostock7 Jun 2022 14:05 UTC
7 points
0 comments4 min readLW link

Game Theory without Argmax [Part 1]

Cleo Nardo11 Nov 2023 15:59 UTC
69 points
18 comments19 min readLW link

Game Theory without Argmax [Part 2]

Cleo Nardo11 Nov 2023 16:02 UTC
31 points
14 comments13 min readLW link

Stable Pointers to Value III: Recursive Quantilization

abramdemski21 Jul 2018 8:06 UTC
20 points
4 comments4 min readLW link

When do utility functions constrain?

Hoagy23 Aug 2019 17:19 UTC
30 points
8 comments7 min readLW link

Shard Theory: An Overview

David Udell11 Aug 2022 5:44 UTC
165 points
34 comments10 min readLW link

[Question] Why doesn’t the presence of log-loss for probabilistic models (e.g. sequence prediction) imply that any utility function capable of producing a “fairly capable” agent will have at least some non-negligible fraction of overlap with human values?

Thoth Hermes16 May 2023 18:02 UTC
2 points
0 comments1 min readLW link

Pinpointing Utility

[deleted]1 Feb 2013 3:58 UTC
94 points
156 comments13 min readLW link

[link] Choose your (preference) utilitarianism carefully – part 1

Kaj_Sotala25 Jun 2015 12:06 UTC
21 points
6 comments2 min readLW link

Comparing Utilities

abramdemski14 Sep 2020 20:56 UTC
71 points
31 comments17 min readLW link

Intertheoretic utility comparison

Stuart_Armstrong3 Jul 2018 13:44 UTC
23 points
11 comments6 min readLW link

If you don’t know the name of the game, just tell me what I mean to you

Stuart_Armstrong26 Oct 2010 13:43 UTC
16 points
26 comments5 min readLW link

Computational efficiency reasons not to model VNM-rational preference relations with utility functions

AlexMennen25 Jul 2018 2:11 UTC
16 points
5 comments3 min readLW link

[Question] Why The Focus on Expected Utility Maximisers?

DragonGod27 Dec 2022 15:49 UTC
116 points
84 comments3 min readLW link

Person-moment affecting views

KatjaGrace7 Mar 2018 2:30 UTC
17 points
8 comments5 min readLW link
(meteuphoric.wordpress.com)

Why Subagents?

johnswentworth1 Aug 2019 22:17 UTC
174 points
48 comments7 min readLW link1 review

To capture anti-death intuitions, include memory in utilitarianism

Kaj_Sotala15 Jan 2014 6:27 UTC
12 points
34 comments3 min readLW link

The Isolation Assumption of Expected Utility Maximization

Pedro Oliboni6 Aug 2020 4:05 UTC
7 points
1 comment5 min readLW link

Resolving von Neumann-Morgenstern Inconsistent Preferences

niplav22 Oct 2024 11:45 UTC
31 points
5 comments58 min readLW link

Distinctions when Discussing Utility Functions

ozziegooen9 Mar 2024 20:14 UTC
24 points
7 comments1 min readLW link

An Attempt at Preference Uncertainty Using VNM

[deleted]16 Jul 2013 5:20 UTC
15 points
33 comments6 min readLW link

Choosing the Zero Point

orthonormal6 Apr 2020 23:44 UTC
173 points
24 comments3 min readLW link2 reviews

Differential Optimization Reframes and Generalizes Utility-Maximization

J Bostock27 Dec 2023 1:54 UTC
30 points
2 comments3 min readLW link

Research Agenda v0.9: Synthesising a human’s preferences into a utility function

Stuart_Armstrong17 Jun 2019 17:46 UTC
70 points
26 comments33 min readLW link

Applying utility functions to humans considered harmful

Kaj_Sotala3 Feb 2010 19:22 UTC
36 points
116 comments5 min readLW link

Satisficers want to become maximisers

Stuart_Armstrong21 Oct 2011 16:27 UTC
38 points
70 comments1 min readLW link

Against utility functions

Qiaochu_Yuan19 Jun 2014 5:56 UTC
67 points
87 comments1 min readLW link

Using expected utility for Good(hart)

Stuart_Armstrong27 Aug 2018 3:32 UTC
42 points
5 comments4 min readLW link

We Are Less Wrong than E. T. Jaynes on Loss Functions in Human Society

Zack_M_Davis5 Jun 2023 5:34 UTC
46 points
14 comments2 min readLW link

Descriptive vs. specifiable values

TsviBT26 Mar 2023 9:10 UTC
17 points
2 comments2 min readLW link

Nature < Nurture for AIs

scottviteri4 Jun 2023 20:38 UTC
14 points
22 comments7 min readLW link

Ethodynamics of Omelas

dr_s10 Jun 2023 16:24 UTC
79 points
17 comments9 min readLW link

The Domain of Your Utility Function

Peter_de_Blanc23 Jun 2009 4:58 UTC
42 points
99 comments2 min readLW link

Simplified preferences needed; simplified preferences sufficient

Stuart_Armstrong5 Mar 2019 19:39 UTC
33 points
6 comments3 min readLW link

Utility versus Reward function: partial equivalence

Stuart_Armstrong13 Apr 2018 14:58 UTC
18 points
5 comments5 min readLW link

The Preference Utilitarian’s Time Inconsistency Problem

Wei Dai15 Jan 2010 0:26 UTC
34 points
107 comments1 min readLW link

Complex Behavior from Simple (Sub)Agents

moridinamael10 May 2019 21:44 UTC
113 points
13 comments9 min readLW link1 review

“Solving” selfishness for UDT

Stuart_Armstrong27 Oct 2014 17:51 UTC
39 points
52 comments8 min readLW link

Why Universal Comparability of Utility?

AK13 May 2018 0:10 UTC
8 points
16 comments1 min readLW link

What we talk about when we talk about maximising utility

Richard_Ngo24 Feb 2018 22:33 UTC
14 points
18 comments4 min readLW link

VNM expected utility theory: uses, abuses, and interpretation

Academian17 Apr 2010 20:23 UTC
36 points
51 comments10 min readLW link

Risk aversion vs. concave utility function

dvasya31 Jan 2012 6:25 UTC
3 points
35 comments3 min readLW link

Is the Endowment Effect Due to Incomparability?

Kevin Dorst10 Jul 2023 16:26 UTC
21 points
10 comments7 min readLW link
(kevindorst.substack.com)

Universal agents and utility functions

Anja14 Nov 2012 4:05 UTC
43 points
38 comments6 min readLW link

Harsanyi’s Social Aggregation Theorem and what it means for CEV

AlexMennen5 Jan 2013 21:38 UTC
37 points
90 comments4 min readLW link

Verifying vNM-rationality requires an ontology

jeyoor13 Mar 2019 0:03 UTC
25 points
5 comments1 min readLW link

Are pre-specified utility functions about the real world possible in principle?

mlogan11 Jul 2018 18:46 UTC
24 points
7 comments4 min readLW link

ACI#4: Seed AI is the new Perpetual Motion Machine

Akira Pyinya8 Jul 2023 1:17 UTC
−7 points
0 comments6 min readLW link

Alignment, conflict, powerseeking

Oliver Sourbut22 Nov 2023 9:47 UTC
6 points
1 comment1 min readLW link

Optimisation Measures: Desiderata, Impossibility, Proposals

7 Aug 2023 15:52 UTC
35 points
9 comments1 min readLW link

The Case for Convexity

Jesse Richardson9 Aug 2023 14:09 UTC
19 points
3 comments1 min readLW link

Why the beliefs/values dichotomy?

Wei Dai20 Oct 2009 16:35 UTC
29 points
156 comments2 min readLW link

Knightian Uncertainty and Ambiguity Aversion: Motivation

So8res21 Jul 2014 20:32 UTC
46 points
44 comments13 min readLW link

Conceptual problems with utility functions

Dacyn11 Jul 2018 1:29 UTC
22 points
12 comments2 min readLW link

Allais Malaise

Eliezer Yudkowsky21 Jan 2008 0:40 UTC
40 points
38 comments2 min readLW link

I’m confused. Could someone help?

CronoDAS23 Mar 2009 5:26 UTC
1 point
12 comments1 min readLW link

On dollars, utility, and crack cocaine

PhilGoetz4 Apr 2009 0:00 UTC
16 points
100 comments2 min readLW link

Real-world examples of money-pumping?

sixes_and_sevens25 Apr 2013 13:49 UTC
28 points
97 comments1 min readLW link

How Not to be Stupid: Adorable Maybes

Psy-Kosh29 Apr 2009 19:15 UTC
1 point
55 comments3 min readLW link

Expected utility without the independence axiom

Stuart_Armstrong28 Oct 2009 14:40 UTC
20 points
68 comments4 min readLW link

Allais Hack—Transform Your Decisions!

MBlume3 May 2009 22:37 UTC
22 points
19 comments2 min readLW link

How Not to be Stupid: Brewing a Nice Cup of Utilitea

Psy-Kosh9 May 2009 8:14 UTC
2 points
17 comments6 min readLW link

Wanting to Want

Alicorn16 May 2009 3:08 UTC
30 points
199 comments2 min readLW link

Arguments for utilitarianism are impossibility arguments under unbounded prospects

MichaelStJules7 Oct 2023 21:08 UTC
7 points
7 comments21 min readLW link

Expected futility for humans

Roko9 Jun 2009 12:04 UTC
14 points
53 comments3 min readLW link

If it looks like utility maximizer and quacks like utility maximizer...

taw11 Jun 2009 18:34 UTC
20 points
24 comments2 min readLW link

Utility Maximization = Description Length Minimization

johnswentworth18 Feb 2021 18:04 UTC
213 points
44 comments5 min readLW link

A fungibility theorem

Nisan12 Jan 2013 9:27 UTC
35 points
66 comments6 min readLW link

Chasing Infinities

Michael Bateman16 Aug 2021 1:19 UTC
2 points
1 comment9 min readLW link

The Doubling Box

Mestroyer6 Aug 2012 5:50 UTC
22 points
84 comments3 min readLW link

Housing Markets, Satisficers, and One-Track Goodhart

J Bostock16 Dec 2021 21:38 UTC
2 points
2 comments2 min readLW link

[Question] Your Preferences

PeterL5 Jan 2022 18:49 UTC
1 point
4 comments1 min readLW link

Impossibility results for unbounded utilities

paulfchristiano2 Feb 2022 3:52 UTC
167 points
109 comments8 min readLW link1 review

The Unified Theory of Normative Ethics

Thane Ruthenis17 Jun 2022 19:55 UTC
8 points
0 comments6 min readLW link

The “Measuring Stick of Utility” Problem

johnswentworth25 May 2022 16:17 UTC
74 points
25 comments3 min readLW link

Adaptation Executors and the Telos Margin

Plinthist20 Jun 2022 13:06 UTC
2 points
8 comments5 min readLW link

Reinforcement Learner Wireheading

Nate Showell8 Jul 2022 5:32 UTC
8 points
2 comments3 min readLW link

Utility functions and probabilities are entangled

Thomas Kwa26 Jul 2022 5:36 UTC
15 points
5 comments1 min readLW link

A gentle primer on caring, including in strange senses, with applications

Kaarel30 Aug 2022 8:05 UTC
10 points
4 comments18 min readLW link

Bridging Expected Utility Maximization and Optimization

Whispermute5 Aug 2022 8:18 UTC
25 points
5 comments14 min readLW link

the Insulated Goal-Program idea

Tamsin Leake13 Aug 2022 9:57 UTC
46 points
4 comments2 min readLW link
(carado.moe)

goal-program bricks

Tamsin Leake13 Aug 2022 10:08 UTC
31 points
2 comments2 min readLW link
(carado.moe)

An Unexpected GPT-3 Decision in a Simple Gamble

hatta_afiq25 Sep 2022 16:46 UTC
8 points
4 comments1 min readLW link

Will Values and Competition Decouple?

interstice28 Sep 2022 16:27 UTC
15 points
11 comments17 min readLW link

Why Bet Kelly?

Joe Zimmerman29 Nov 2022 18:47 UTC
16 points
4 comments4 min readLW link

Take 7: You should talk about “the human’s utility function” less.

Charlie Steiner8 Dec 2022 8:14 UTC
50 points
22 comments2 min readLW link

Thatcher’s Axiom

Edward P. Könings24 Jan 2023 22:35 UTC
10 points
22 comments4 min readLW link

The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial

Roman Leventov14 Feb 2023 6:57 UTC
6 points
0 comments2 min readLW link
(arxiv.org)

[Question] Mathematical models of Ethics

Victors8 Mar 2023 17:40 UTC
4 points
2 comments1 min readLW link

AI Alignment 2018-19 Review

Rohin Shah28 Jan 2020 2:19 UTC
126 points
6 comments35 min readLW link

Sublimity vs. Youtube

Alicorn18 Mar 2011 5:33 UTC
33 points
63 comments1 min readLW link

[Question] Why does expected utility matter?

Marco Discendenti25 Dec 2023 14:47 UTC
18 points
21 comments4 min readLW link

Utility is relative

CrimsonChin8 Jan 2024 2:31 UTC
2 points
4 comments2 min readLW link

A Pedagogical Guide to Corrigibility

A.H.17 Jan 2024 11:45 UTC
6 points
3 comments16 min readLW link

Increasingly vague interpersonal welfare comparisons

MichaelStJules1 Feb 2024 6:45 UTC
5 points
0 comments1 min readLW link

Types of subjective welfare

MichaelStJules2 Feb 2024 9:56 UTC
10 points
3 comments1 min readLW link

Solution to the two envelopes problem for moral weights

MichaelStJules19 Feb 2024 0:15 UTC
9 points
1 comment1 min readLW link

Individual Utilities Shift Continuously as Geometric Weights Shift

StrivingForLegibility7 Aug 2024 1:41 UTC
2 points
0 comments17 min readLW link

Gradient Ascenders Reach the Harsanyi Hyperplane

StrivingForLegibility7 Aug 2024 1:40 UTC
2 points
0 comments6 min readLW link

Deriving the Geometric Utilitarian Weights

StrivingForLegibility7 Aug 2024 1:39 UTC
2 points
0 comments11 min readLW link

Proving the Geometric Utilitarian Theorem

StrivingForLegibility7 Aug 2024 1:39 UTC
25 points
0 comments8 min readLW link

Geometric Utilitarianism (And Why It Matters)

StrivingForLegibility12 May 2024 3:41 UTC
26 points
2 comments11 min readLW link

The Geometric Importance of Side Payments

StrivingForLegibility7 Aug 2024 1:38 UTC
8 points
4 comments3 min readLW link

Gradations of moral weight

MichaelStJules29 Feb 2024 23:08 UTC
1 point
0 comments1 min readLW link

The Impossibility of a Rational Intelligence Optimizer

Nicolas Villarreal6 Jun 2024 16:14 UTC
−9 points
5 comments14 min readLW link

[Aspiration-based designs] A. Damages from misaligned optimization – two more models

15 Jul 2024 14:08 UTC
6 points
0 comments9 min readLW link

Sequence overview: Welfare and moral weights

MichaelStJules15 Aug 2024 4:22 UTC
7 points
0 comments1 min readLW link

[Question] Doing Nothing Utility Function

k6426 Sep 2024 22:05 UTC
9 points
9 comments1 min readLW link

Galatea and the windup toy

Nicolas Villarreal26 Oct 2024 14:52 UTC
−4 points
0 comments13 min readLW link
(nicolasdvillarreal.substack.com)

Humans are utility monsters

PhilGoetz16 Aug 2013 21:05 UTC
123 points
216 comments2 min readLW link

Freedom Is All We Need

Leo Glisic27 Apr 2023 0:09 UTC
−1 points
8 comments10 min readLW link

(A Failed Approach) From Precedent to Utility Function

Akira Pyinya29 Apr 2023 21:55 UTC
0 points
2 comments4 min readLW link

Agents which are EU-maximizing as a group are not EU-maximizing individually

Mlxa4 Dec 2023 18:49 UTC
3 points
2 comments2 min readLW link

A Brief Introduction to ACI, 3.5: How ACI improves Active Inference

Akira Pyinya29 Apr 2023 23:51 UTC
−7 points
0 comments2 min readLW link

[Question] “Do Nothing” utility function, 3½ years later?

niplav20 Jul 2020 11:09 UTC
5 points
3 comments1 min readLW link

Degrees of Freedom

sarahconstantin2 Apr 2019 21:10 UTC
103 points
31 comments11 min readLW link
(srconstantin.wordpress.com)

Against the Linear Utility Hypothesis and the Leverage Penalty

AlexMennen14 Dec 2017 18:38 UTC
41 points
47 comments11 min readLW link

Terminal Values and Instrumental Values

Eliezer Yudkowsky15 Nov 2007 7:56 UTC
114 points
45 comments10 min readLW link

Three ways that “Sufficiently optimized agents appear coherent” can be false

Wei Dai5 Mar 2019 21:52 UTC
65 points
3 comments3 min readLW link

The genie knows, but doesn’t care

Rob Bensinger6 Sep 2013 6:42 UTC
119 points
495 comments8 min readLW link

Buridan’s ass in coordination games

jessicata16 Jul 2018 2:51 UTC
52 points
26 comments10 min readLW link

Pascal’s Mugging: Tiny Probabilities of Vast Utilities

Eliezer Yudkowsky19 Oct 2007 23:37 UTC
112 points
353 comments4 min readLW link

Pascal’s Muggle: Infinitesimal Priors and Strong Evidence

Eliezer Yudkowsky8 May 2013 0:43 UTC
73 points
402 comments26 min readLW link

We Don’t Have a Utility Function

[deleted]2 Apr 2013 3:49 UTC
73 points
118 comments4 min readLW link

Probability is Real, and Value is Complex

abramdemski20 Jul 2018 5:24 UTC
79 points
20 comments6 min readLW link

The Lifes­pan Dilemma

Eliezer Yudkowsky10 Sep 2009 18:45 UTC
61 points
220 comments7 min readLW link

When to use quantilization

RyanCarey5 Feb 2019 17:17 UTC
65 points
5 comments4 min readLW link

Big Advance in Infinite Ethics

bwest28 Nov 2017 15:10 UTC
32 points
13 comments5 min readLW link

Fake Utility Functions

Eliezer Yudkowsky6 Dec 2007 16:55 UTC
69 points
63 comments4 min readLW link

More on the Linear Utility Hypothesis and the Leverage Prior

AlexMennen26 Feb 2018 23:53 UTC
16 points
4 comments9 min readLW link

Expected utility, unlosing agents, and Pascal’s mugging

Stuart_Armstrong28 Jul 2014 18:05 UTC
32 points
54 comments5 min readLW link

ACI #3: The Origin of Goals and Utility

Akira Pyinya17 May 2023 20:47 UTC
1 point
0 comments6 min readLW link

Is risk aversion really irrational ?

kilobug31 Jan 2012 20:34 UTC
54 points
65 comments9 min readLW link

Coherent behaviour in the real world is an incoherent concept

Richard_Ngo11 Feb 2019 17:00 UTC
51 points
17 comments9 min readLW link

A summary of Savage’s foundations for probability and utility.

Sniffnoy22 May 2011 19:56 UTC
81 points
91 comments13 min readLW link

Logarithms and Total Utilitarianism

Pablo Villalobos9 Aug 2018 8:49 UTC
37 points
31 comments4 min readLW link

Tendencies in reflective equilibrium

Scott Alexander20 Jul 2011 10:38 UTC
51 points
70 comments4 min readLW link

Underappreciated points about utility functions (of both sorts)

Sniffnoy4 Jan 2020 7:27 UTC
47 points
61 comments15 min readLW link

Zut Allais!

Eliezer Yudkowsky20 Jan 2008 3:18 UTC
57 points
51 comments6 min readLW link

Against Discount Rates

Eliezer Yudkowsky21 Jan 2008 10:00 UTC
37 points
81 comments2 min readLW link

Why you must maximize expected utility

Benya13 Dec 2012 1:11 UTC
50 points
76 comments21 min readLW link

Only humans can have human values

PhilGoetz26 Apr 2010 18:57 UTC
48 points
161 comments17 min readLW link

Post Your Utility Function

taw4 Jun 2009 5:05 UTC
38 points
280 comments1 min readLW link

What resources have increasing marginal utility?

Qiaochu_Yuan14 Jun 2014 3:43 UTC
59 points
63 comments1 min readLW link