Privacy and writing
Epistemic status: N=1
I’ve always written several thousand words a day in a private Google Doc about anything that came to mind. Only recently have I started publishing to LessWrong. For me it’s a long and arduous process, usually too slow to be worth the effort.[1] Still, publishing on LW is probably a net good overall.
It also leads to interesting new failure modes. Here are a few.
Emotional security
Orwell’s 1984:
It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. A nervous tic, an unconscious look of anxiety, a habit of muttering to yourself—anything that carried with it the suggestion of abnormality, of having something to hide.
Ingsoc has become so invasive and precise in its button-pushing that citizens must monitor their own thoughts: the state forces them to systematically suppress their own subagents. Its definition of “public” has gained so much ground over “private” that there’s barely anything left of the latter. Orwell is describing the ideal of totalitarianism.
Holly Elmore writes about privacy[2] (emphasis mine):
For many years, I thought privacy was a fake virtue and only valuable for self-defense. [...] I thought privacy was an important right, but that the ideal was not to need it.
I’m coming back around to privacy for a few reasons, first of which was my several year experiment with radical transparency. For a lot of that time, it seemed to be working. Secrets didn’t pile up and incubate shame, and white lies were no longer at my fingertips. I felt less embarrassed and ashamed over the kind of things everyone has but no one talks about. Not all of it was unhealthy sharing, but I knew I frequently met the definition of oversharing– I just didn’t understand what was wrong with that. [...]
I now believe that, because I scrupulously reported almost anything to anyone who asked (or didn’t ask), I conveniently stopped being aware of a lot of my most personal and tender feelings. [...]
I now think privacy is important for maximizing self-awareness and self-transparency. The primary function of privacy is not to hide things society finds unacceptable, but to create an environment in which your own mind feels safe to tell you things. If you’re not allowing these unshareworthy thoughts and feelings a space to come out, they still affect your feelings and behavior– you just don’t know how or why. And all the while your conscious self-image is growing more alienated from the processes that actually drive you. Privacy creates the necessary conditions for self-honesty, which is a necessary prerequisite to honesty with anyone else. When you only know a cleaned-up version of yourself, you’ll only be giving others a version of your truth.
This is a more voluntary kind of subagent-suppression than what’s going on in 1984, and it’s motivated by signaling rather than survival. That doesn’t make it much less disastrous, as far as quality of thinking is concerned. In both cases, you are forcing filters closer and closer to the source of your thoughts, and suffering for it.
Intellectual output is heavily mediated by your sense of emotional security, and both of these cases demonstrate failure modes that come from lacking it.
Intellectual security
Elizabeth writes:
Sometimes talking with my friends is like intellectual combat, which is great. I am glad I have such strong cognitive warriors on my side. But not all ideas are ready for intellectual combat. If I don’t get my friend on board with this, some of them will crush an idea before it gets a chance to develop, which feels awful and can kill off promising avenues of investigation. It’s like showing a beautiful, fragile butterfly to your friend to demonstrate the power of flight, only to have them grab it and crush it in their hands, then point to the mangled corpse as proof butterflies not only don’t fly, but can’t fly, look how busted their wings are.
I’m writing this in a personal Google Doc (instead of, say, the LW editor), which helps me feel free to go on tangents. But I know I’m going to end up publishing it, and I can’t help writing like it’s the final draft. LessWrong is probably the most cutting-edge butterfly-crushing capabilities lab in the world. It’s scary.
I’m applying LW-grade intellectual rigor to an exploratory draft. This restricts creativity, makes the work less fun overall, and isn’t even fair. Were the best works written without intellectual security? No. They all started out as loose collections of butterfly ideas, just like this one. It’s only after benefiting from a load of intellectual slack (e.g. writing in private) that they reached their present ironclad status.
Freedom of goal
Tsvi writes:
It’s just that there’s a danger in having fun with math because it helps you learn it more deeply, rather than because it’s fun. Talking about how it helps you learn it more deeply is supposed to be a signpost, not always the main active justification. A signpost is a signal that speaks to you when you’re in a certain mood, and tells you how and why to let yourself move into other moods. A signpost is tailored to the mood it’s speaking to, so it speaks in the language of the mood that it’s pointing away from, not in the language of the mood it’s pointing the way towards. If you’re in the mood of justifying everything in terms of how it helps decrease existential risk, then the justification “having fun with math helps you learn it better” might be compelling. But the result isn’t supposed to be “try really hard to do what someone having fun would do” or “try really hard to satisfy the requirement of having fun, in order to decrease X-risk”, it’s supposed to be actually having fun. Actually having fun is a different mood from justifying everything in terms of X-risk. Imagine a six-year-old having fun; it’s not because of X-risk.
I’m writing this in expectation that it’ll be useful to someone on LessWrong. That’s important to me for the express purpose of contributing to x-risk mitigation, even if it’s in a small and indirect way. This means I’m restricting my thoughtspace to “things that seem useful”, which blocks me from accessing that vast space called “things that don’t seem useful, but are in fact useful”. (See Paul Graham noticing his confusion at this.)
When writing on LW, I don’t feel like I have freedom of goal. This is my fault, not the platform’s; I know all sorts of posts Tsvi would call “fun” are appreciated here. Nonetheless, I only feel comfortable exploring nothing in particular, for no other reason than curiosity, when I’m in private. If I did it on LW, I’d feel like I was squishing flowers and making the website overall worse.[3]
Practical takeaways
Anytime I formulate a sentence or footnote I like but that isn’t on topic, I copy it into another doc, which serves as my negentropy reservoir for unfinished tangents. This lets me avoid killing my darlings (squishing butterflies) while still getting the post out the door within a short timeframe.
When I want to publish an idea, I explicitly label it in my doc as “to publish”. Doing this, I trade freedom of movement for focus.[4] And when I don’t want to publish, I sometimes try being deliberately messy in my private docs by e.g. skipping inferential steps or writing run-on sentences and asymmetric footnotes. I think “deliberately messy” is key here, because if I start controlling for quality while I’m writing, I end up slippery-sloping into stressing over the placement of every comma. Personal docs are meant to be chaotic; you’d stifle that garden by tending to it like you’d tend to LessWrong. You should let weeds grow everywhere.
Before publishing, I tend to get my ideas vetted by a group of friends through email. This reaps many of the benefits that come with public writing (like slamming my map against the territory) while dodging many of the emotional and intellectual security concerns LW represents for me. Plus, it’s more fun because I can afford to be more casual. Don’t get the weed-whacker just yet.
[1] As per James Somers’ seminal post, I’ve put effort into speeding up my writing for the express purpose of reducing the average effort/time cost per published post.
[2] Thanks to Kaj Sotala’s comment for inspiring this post.
[3] I’m grateful for how well the karma system works. When I write bad posts, they quietly go away and never get read again, and I don’t have to feel guilty for wasting people’s time. So the karma system makes me more likely to publish.
[4] Would you believe me if I said I’d never realized that phrase was redundant until now?