If you’re thinking without writing, you only think you’re thinking.
- Leslie Lamport
This seems... straightforwardly false. People think in many different modalities, and translating those modalities into words is not always trivial. Even if by “writing” Lamport means any form of recording thoughts, the claim still seems false: oftentimes an idea incubates in my head for months before I find a good way to represent it as words or math or pictures or anything else.
Also, writing and thinking are separate (albeit closely related) skills, especially when you take “writing” to mean writing for an audience, so the thesis of this Paul Graham post is also false. I’ve been thinking reasonably well for about 16 years, and only recently have I started gaining much of an ability to write.
Are Lamport and Graham just wordcels committing a typical-mind fallacy, or is there more to this that I’m not seeing? What’s the steelman of the claim that good thinking == good writing?
Convex agents are practically invisible.
We currently live in a world full of double-or-nothing gambles on resources. Bet it all on black. Invest it all in risky options. Go on a space mission with a 99% chance of death, but a 1% chance of reaching Jupiter, which has roughly 300 times the mass-energy of Earth and none of those pesky humans who keep trying to eat your resources. Challenge one such pesky human to a duel.
Make these bets over and over again and your chance of total failure (i.e. death) approaches 100%. When convex agents appear in real life, they do this, and very quickly die. For these agents, that is all part of the plan. Their death is worth it for a fraction of a percent chance of getting a ton of resources.
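To make the arithmetic concrete, here is a minimal sketch in Python. It is my own toy illustration, not anything from the original argument: I'm assuming a quadratic utility for the convex agent and a square-root utility for the concave one. A fair double-or-nothing bet raises the convex agent's expected utility, so it keeps betting, and its survival probability after n accepted bets is 2^-n.

```python
import math

# Toy illustration (assumed utilities: u(x) = x^2 for the convex agent,
# u(x) = sqrt(x) for the concave one). The bet is fair double-or-nothing:
# with probability 1/2 resources double, with probability 1/2 they go to zero.

def expected_utility_of_bet(x, u):
    """Expected utility of betting everything you have, at fair odds."""
    return 0.5 * u(2 * x) + 0.5 * u(0)

convex = lambda x: x ** 2          # E[u] = 2x^2 > x^2             -> take the bet
concave = lambda x: math.sqrt(x)   # E[u] ~ 0.707*sqrt(x) < sqrt(x) -> decline

x = 1.0
print(expected_utility_of_bet(x, convex) > convex(x))    # True
print(expected_utility_of_bet(x, concave) > concave(x))  # False

# A convex agent that keeps accepting survives n bets with probability 2**-n.
for n in (1, 10, 30):
    print(n, 2.0 ** -n)
```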
But we, as concave agents, don’t really care about them, or about the tiny-probability worlds they are betting everything on, which is why they are nearly invisible to us. We might as well be in completely logically disconnected worlds. Convex agents feel the same about us, since most of their utility is concentrated on those tiny-probability worlds where a bunch of their bets pay off in a row (for most value functions, that means we die). And they feel even more strongly about each other.
This serves as a selection argument for why agents we see in real life (including ourselves) tend to be concave (with some notable exceptions). The convex ones take a bunch of double-or-nothing bets in a row, and, in almost all worlds, eventually land on “nothing”.
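As a rough check on the selection story, here is another toy sketch of my own (the population size and number of rounds are arbitrary): start a large population of always-betting convex agents and see how many still hold anything after a handful of rounds.

```python
import random

# Toy selection simulation: every agent bets all of its resources on a fair
# coin each round. Total resources are conserved in expectation, but almost
# every individual agent ends up with nothing.

def simulate(n_agents=100_000, n_rounds=10, seed=0):
    rng = random.Random(seed)
    resources = [1.0] * n_agents
    for _ in range(n_rounds):
        resources = [2 * r if rng.random() < 0.5 else 0.0 for r in resources]
    survivors = sum(1 for r in resources if r > 0)
    return survivors, sum(resources)

survivors, total = simulate()
print(survivors)  # on the order of 100_000 / 2**10, i.e. roughly 100 agents
print(total)      # each survivor holds 2**10 = 1024 units
```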