
Coherence Arguments


There are no coherence theorems

20 Feb 2023 21:25 UTC
145 points
125 comments · 19 min read · LW link · 1 review

Coherent decisions imply consistent utilities

Eliezer Yudkowsky · 12 May 2019 21:33 UTC
149 points
81 comments · 26 min read · LW link · 3 reviews

Coherence arguments do not entail goal-directed behavior

Rohin Shah · 3 Dec 2018 3:26 UTC
133 points
69 comments · 7 min read · LW link · 3 reviews

[Question] What do coherence arguments actually prove about agentic behavior?

sunwillrise · 1 Jun 2024 9:37 UTC
123 points
35 comments · 6 min read · LW link

Coherence arguments imply a force for goal-directed behavior

KatjaGrace · 26 Mar 2021 16:10 UTC
91 points
25 comments · 11 min read · LW link · 1 review
(aiimpacts.org)

Coherence of Caches and Agents

johnswentworth · 1 Apr 2024 23:04 UTC
76 points
9 comments · 11 min read · LW link

A Simple Toy Coherence Theorem

2 Aug 2024 17:47 UTC
74 points
20 comments · 7 min read · LW link

When Most VNM-Coherent Preference Orderings Have Convergent Instrumental Incentives

TurnTrout · 9 Aug 2021 17:22 UTC
53 points
4 comments · 5 min read · LW link

Counting-down vs. counting-up coherence

TsviBT · 27 Feb 2023 14:59 UTC
29 points
4 comments · 13 min read · LW link

Contra “Strong Coherence”

DragonGod · 4 Mar 2023 20:05 UTC
39 points
24 comments · 1 min read · LW link

[Question] Is “Strong Coherence” Anti-Natural?

DragonGod · 11 Apr 2023 6:22 UTC
23 points
25 comments · 2 min read · LW link

[Question] Money Pump Arguments assume Memoryless Agents. Isn’t this Unrealistic?

Dalcy · 16 Aug 2024 4:16 UTC
23 points
6 comments · 1 min read · LW link

The Impossibility of a Rational Intelligence Optimizer

Nicolas Villarreal · 6 Jun 2024 16:14 UTC
−9 points
5 comments · 14 min read · LW link

Three ways that “Sufficiently optimized agents appear coherent” can be false

Wei Dai · 5 Mar 2019 21:52 UTC
65 points
3 comments · 3 min read · LW link

Comment on Coherence arguments do not imply goal directed behavior

Ronny Fernandez · 6 Dec 2019 9:30 UTC
30 points
8 comments · 5 min read · LW link

[Question] Is there a “coherent decisions imply consistent utilities”-style argument for non-lexicographic preferences?

Tetraspace · 29 Jun 2021 19:14 UTC
4 points
20 comments · 1 min read · LW link

The hot mess theory of AI misalignment: More intelligent agents behave less coherently

Jonathan Yan · 10 Mar 2023 0:20 UTC
47 points
21 comments · 1 min read · LW link
(sohl-dickstein.github.io)

Deriving Conditional Expected Utility from Pareto-Efficient Decisions

Thomas Kwa · 5 May 2022 3:21 UTC
24 points
1 comment · 6 min read · LW link

The “Measuring Stick of Utility” Problem

johnswentworth · 25 May 2022 16:17 UTC
74 points
25 comments · 3 min read · LW link

[Question] Why The Focus on Expected Utility Maximisers?

DragonGod · 27 Dec 2022 15:49 UTC
116 points
84 comments · 3 min read · LW link

[Request for Distillation] Coherence of Distributed Decisions With Different Inputs Implies Conditioning

johnswentworth · 25 Apr 2022 17:01 UTC
22 points
14 comments · 2 min read · LW link

Measuring Coherence and Goal-Directedness in RL Policies

dx26 · 22 Apr 2024 18:26 UTC
10 points
0 comments · 7 min read · LW link

Coherent behaviour in the real world is an incoherent concept

Richard_Ngo · 11 Feb 2019 17:00 UTC
51 points
17 comments · 9 min read · LW link

Let’s look for coherence theorems

Valdes · 7 May 2023 14:45 UTC
25 points
18 comments · 6 min read · LW link

[Linkpost] Will AI avoid exploitation?

cdkg · 6 Aug 2023 14:28 UTC
22 points
1 comment · 1 min read · LW link

Do incoherent entities have stronger reason to become more coherent than less?

KatjaGrace · 30 Jun 2021 5:50 UTC
46 points
5 comments · 4 min read · LW link
(worldspiritsockpuppet.com)

It Can’t Be Mesa-Optimizers All The Way Down (Or Else It Can’t Be Long-Term Supercoherence?)

Austin Witte · 31 Mar 2023 7:21 UTC
20 points
5 comments · 4 min read · LW link