The Onion Test for Personal and Institutional Honesty
[co-written by Chana Messinger and Andrew Critch; Andrew is the originator of the idea]
You (or your organization, your mission, your family, etc.) pass the “onion test” for honesty if each layer hides, but does not mislead about, the information hidden within.
When people get to know you better, or rise higher in your organization, they may find out new things, but they should not be shocked by the types of information that were hidden. If they are, you messed up: your outer layers should have appropriately described the kinds of things that might be inside.
Examples
Positive Example:
Outer layer says “I usually treat my health information as private.”
Next layer in says: “Here are the specific health problems I have: Gout, diabetes.”
Negative example:
Outer layer says: “I usually treat my health info as private.”
Next layer in: “I operate a cocaine dealership. Sorry I didn’t warn you that I was also private about my illegal activities.”
Negative example:
Outer layer says: “Is it ok if I take notes on our conversation?”
Next layer in: “Here’s the group chat where I mocked each point you made to 12 people, some of whom know you”
Positive Example:
Outer layer says “Is it ok if I take notes on our conversation? Also, I’d like to share my unfiltered thoughts about it with some colleagues later.”
Next layer in says: “Jake thinks the new emphasis on wood-built buildings won’t last. Seems overconfident.”
------------------------------------------------------------------------------------------------
Passing the test is a function both of what you conveyed (explicitly and implicitly) and of the expectations of others. If it’s normal to start mocking group chats, then that doesn’t need to be said aloud to avoid shock and surprise. The illusion of transparency comes to bite here.
Andrew:
Social friction minimization is the default trend that shapes the outer layers of a person or institution, by eroding away the bits of information that might cause offence, leaving layers of more pungent information underneath. The “onion model” of honesty or integrity is that each layer of your personality or institution should hide but not mislead about the layer underneath it. This usually involves each layer sharing something about the kinds of information that are in the next layer in, like “I generally keep my health information private”, so people won’t assume that a lack of info about your health means you’re doing just fine health-wise.
It takes a bit of work to put sign-posts on your outer layer about what kinds of information are inside, and it takes more work to present those sign-posts in a socially smooth way that doesn’t raise unnecessary fears or alarms. However, if you put in that work, you can safely get to know people without them starting to wonder, “What else is this person or institution hiding from me?” And, if everyone puts in that work, society in general becomes more trustworthy and navigable.
I started using the onion model in 2008, and since then, I’ve never told a lie. It’s surprisingly workable once you get the hang of it. Some people think privacy is worse than lies, but I believe the opposite is true, and I think it’s worth putting in the effort to quit lying entirely if you’re up to the challenge. Going a bit further, you can add an outer layer of communications that basically tells people what kinds of things you’re keeping private, so not only have you not lied, you’ve also avoided misleading them. That’s the whole onion model.
Chana:
I have found this model extremely useful in the last few months of talking about organizational strategy. It carves between “not everyone gets to know everything” and “actively pointing people in the wrong direction about what’s true lacks integrity,” and it avoids “I didn’t lie, but I knowingly misled.”
So far I have thought about it as a backwards-reflecting device—what on the inside would people be shocked to find out, and how can I make sure they are not shocked?—rather than as forward-looking signposting of all the things I might want to signpost, but I could imagine that changing. (I.e., right now I’m taking this as a useful quick tool, rather than a full orientation to honesty as Andrew does, but that could definitely change.)
In general, over the last few years I have shifted pretty far towards “transparency, honesty, earnestness are extremely powerful and fix a lot of things that can otherwise go wrong.”
On a different note, for me, virtue ethics is attractive, but not real, and tests for integrity are important and useful pointers at things that frequently go wrong and can go better, rather than referenda on your soul. I would guess there are situations in which glomarizing is insufficient, and focusing too much on integrity will reveal the existence of secrets you have no interest in revealing, at least if you are not massively skilled at it.
[Some small edits made, including to the title, for clarification purposes]
Figuring out the edge cases about honesty and truth seems important to me, both as a matter of personal aesthetics and as a matter for LessWrong to pay attention to. One of the things people have used to describe what makes LessWrong special is that it’s a community focused on truth-seeking, which makes “what is truth anyway and how do we talk about it” a worthwhile topic of conversation. This article talks about it in a way that’s clear. (The positive-example/negative-example pattern is a good approach to a topic that can really suffer from the illusion of transparency.)
Like Eliezer’s Meta-Honesty post, the approach suggested does rely on some fast verbal footwork, though the footwork need not be as fast as Meta-Honesty. Passing the Onion Test consistently requires the same kind of comparison to alternate worlds as glomarization, which is a bit of a strike against it but that’s hardly unique to the Onion Test.
I don’t know if people still wind up feeling misled. For instance, I can imagine someone saying “I usually keep my financial state private” and having their conversation partners walk away with wildly different ideas of how they’re doing. Is it so bad they don’t want to talk about it? Is it so good they don’t want to brag? If I thought it was the former and offered to cover their share of dinner repeatedly, I might be annoyed if it turned out to be the latter.
I don’t particularly hold myself to the Onion Test, but it did provide another angle on the subject that I appreciated. Nobody has yet used it this way around me, but I could also see Onion Test declared in a similar manner to Crocker’s Rules, an opt-in social norm that might be recognized by others if it got popular enough. I’m not sure it’s worth the limited conceptual slots a community can have for those, but I wouldn’t feel the slot was wasted if Onion Tests made it that far.
This might be weird, but I really appreciate people having conversations about what they think is honest, and in what ways they think we should be honest, out loud on the internet where I can read them. One can’t assume that everyone has read your article on how you use truth and is thus fairly warned, but it is at least a start. Good social thing to do, A+. I don’t know if more people thinking about this means we’d actually find a real consensus solution, and it’s probably not actually the priority, but I would like a real consensus solution, and at some point someone’s going to have to write down the prototype that leads to it.
Ultimately I don’t actually want this in the Best of 2022, not because it isn’t good, but because I’d worry a little about someone reading through the Best Of collections and thinking this was more settled or established than it is. The crux here is that I don’t think it’s settled, established, or widely read enough that people will know what you mean if you declare Onion Test. If I knew everyone on LessWrong would read everything in the Best Of 2022, then I’d change my mind and want this included so as to add the Test to our collective lexicon.
This is an example of a clear textual writeup of a principle of integrity. I think it’s a pretty good principle, and one that I refer to a lot in my own thinking about integrity.
But even if I thought it was importantly flawed, I think integrity is super important, and therefore I really want to reward and support people thinking explicitly about it. That allows us to notice that our notions are flawed, and improve them, and it also allows us to declare to each other what norms we hold ourselves to, instead of sort of typical minding and assuming that our notion of integrity matches others’ notion, and then being shocked when they behave badly on our terms.
I think about this framing quite a lot. Is what I say going to lead people to assume roughly the thing I think, even if I’m not precise? So the concept is pretty valuable to me.
I don’t know if it was the post that did it, but maybe!