Yes Requires the Possibility of No
1. A group wants to try an activity that requires a lot of group buy-in. The activity will not work as well if there is doubt that everyone really wants to do it. They establish common knowledge of the need for buy-in. They then have a group conversation in which several people make comments about how great the activity is and how much they want to do it. Everyone wants to do the activity, but each person is aware that if they did not want to do it, it would be awkward to admit. They do the activity. It goes poorly.
2. Alice strongly wants to believe A. She searches for evidence of A. She implements a biased search, ignoring evidence against A. She finds justifications for her conclusion. She can then point to the justifications, and tell herself that A is true. However, there is always this nagging thought in the back of her mind that maybe A is false. She never fully believes A as strongly as she would have believed it if she had just implemented an unbiased search and found out that A was, in fact, true.
3. Bob wants Charlie to do a task for him. Bob phrases the request in a way that makes Charlie afraid to refuse. Charlie agrees to do the task. Charlie would have been happy to do the task otherwise, but now Charlie does the task while feeling resentful towards Bob for violating his consent.
4. Derek has an accomplishment. Others often talk about how great the accomplishment is. Derek has impostor syndrome and is unable to fully believe that the accomplishment is good. Part of this is due to a desire to appear humble, but part of it stems from Derek’s lack of self-trust. Derek can see lots of pressures to believe that the accomplishment is good. Derek does not understand exactly how he thinks, and so is concerned that there might be a significant bias that could cause him to falsely conclude that the accomplishment is better than it is. Because of this, he does not fully trust his inside view, which says the accomplishment is good.
5. Eve has an aversion to doing B. She wants to eliminate this aversion. She tries to do an internal double crux (IDC) with herself. She identifies a rational part of herself who can obviously see that it is good to do B. She identifies another part of herself that is afraid of B. The rational part thinks the other part is stupid and can’t imagine being convinced that B is bad. The IDC fails, and Eve continues to have an aversion to B and internal conflict.
6. Frank’s job or relationship is largely dependent on his belief in C. Frank really wants to have true beliefs, and so tries to figure out what is true. He mostly concludes that C is true, but has lingering doubts. He is unsure whether he would have been able to conclude that C is false under all the external pressure.
7. George gets a lot of social benefit out of believing D. He believes D with probability 80%, and this is enough for the social benefit. He considers searching for evidence about D. He thinks the search will likely increase his probability to 90%, but there is a small chance it will decrease his probability to 10%. He values the social benefit quite a bit, and chooses not to search for evidence because he is afraid of the risk.
8. Harry sees lots of studies that conclude E. However, Harry also believes there is a systematic bias that makes studies that conclude E more likely to be published, accepted, and shared. Harry doubts E.
9. A Bayesian wants to increase his probability of proposition F, and is afraid of decreasing it. Every time he tries to find a way to increase his probability, he runs into an immovable wall called Conservation of Expected Evidence. In order to increase his probability of F, he must risk decreasing it, as the sketch below illustrates using George’s numbers from 7.
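To make the wall in 9 concrete, here is a minimal sketch (mine, not from the original post) that works through George’s numbers from 7. The post does not say how likely each search outcome is; the probabilities below are the unique values forced by conservation of expected evidence, given George’s prior and the two possible posteriors.

```python
# Conservation of Expected Evidence: for a proposition D and a search
# with outcomes e, the prior must equal the expected posterior:
#     P(D) = sum over e of P(e) * P(D | e)
# So no search can raise your probability in expectation; any likely
# gain must be balanced by a possible loss.

prior = 0.80       # George's credence in D before searching
post_up = 0.90     # his posterior if the search favors D
post_down = 0.10   # his posterior if the search disfavors D

# The law pins down how likely each outcome must be:
#     prior = p * post_up + (1 - p) * post_down
p_up = (prior - post_down) / (post_up - post_down)  # = 0.875
p_down = 1.0 - p_up                                 # = 0.125

expected_posterior = p_up * post_up + p_down * post_down
assert abs(expected_posterior - prior) < 1e-12  # the prior is conserved

print(f"P(search favors D)    = {p_up:.3f}")                # 0.875
print(f"P(search disfavors D) = {p_down:.3f}")              # 0.125
print(f"expected posterior    = {expected_posterior:.3f}")  # 0.800
```

Note that the search favors D seven times out of eight precisely because the rare unfavorable outcome is so damaging: a likely small rise in probability is paid for with a possible large fall.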
This falls under the category of “things it is good to have a marker to point at,” and this post does a much better job than any previous marker. It also made me more aware of this principle, and caused me to be more explicit about ensuring the possibility of no in some interactions, even though it was socially slightly awkward.
Yes requiring the possibility of no is something I’ve intuitively been aware of in social situations (anywhere one could claim “you would have said that anyway”).
This post does a good job of providing more examples and consequences of this (the examples cover a wide range of decisions), and of tying it to the mathematical law of conservation of expected evidence.
Adding a nomination: this is also a phrase I regularly use.
I see this concept referenced a lot and would be happy if it were spread more widely. The post is also clear and concise.
In “Skill: The Map is Not the Territory,” Eliezer wrote:
That’s exactly how I felt when I read this post. It’s such a fundamental principle, one I had used so much even before reading the post, that it was amazing to me that this phrase was first uttered only as late as 2019.
I also remember thinking that someone must have already said it somewhere, and not being able to find any other instance online.
It’s a principle and a phrase I still use a lot (and probably even more since the post), so I think this post deserves all the nominations it got.
This is one of those posts that crystallizes a thing well enough that you want to remember it and link to it as shorthand. Though I do think most top-tier rationalists already understood this concept prior to 2019, and were applying it, without necessarily having a good name or crystallization for it. This seems like a more accessible, easier-to-connect, and slightly more general version of the concept of conservation of expected evidence.