“Honesty reduces predictability” seems implausible as a thesis.
OpenAI successfully waging the memetic war, as usual
Awesome!
My faves are #4 Intuition Flooding and #12 Incremental Reading. Will try them when I have slack and a topic of interest.
#2 Immersive Reading seems intriguing. I’ve noticed in myself a sense of my reading speed being capped by mental critical filtering processes. I feel like I could increase my comprehension speed at the cost of absorbing contents less discriminately.
#3 Recursive Sampling and #7 Spot the Core are strategies I’ve discovered myself, but no less useful for that.
#8 Triangulating Genius seems effortful but like a great fit for particular cases. #9 Expert Observation is great where the material exists. (Someone should make YouTube videos liveblogging their math learning or social-situation navigation, for me.)
Amusing, instructive, and unfortunate that this post’s actual meaning got lost in politics. IMO it’s one of the better ones.
Am left wondering if “local” here has a technical meaning or is used as a vague pointer.
What people need to get is that Lying is the weaker subset of Deception. It’s the type you can easily call out and retaliate against.
Which is why we evolved to have strong instinctive reactions to it.
I take away:
While doubt may involve encountering disconfirming evidence for a held belief—and it’s proper to immediately update on the doubt-creating evidence, thereby factoring the expected result of further inquiry into your belief-state—
doubt itself is a pointer to a location of yet-unseen evidence: to a specific line of inquiry that may or may not disconfirm the held belief in question.
The inverse, or perhaps a generalization to positive and negative cases, is then Suspicion.
Suspicion points to locations of likely belief-creating or belief-modifying evidence.
Edit: meditating on what this post points to—finding in myself instances of the sensation of rational-doubt, and dwelling on them—proved useful.
I find it important for rationalists to think and talk more about deception.
While, in honesty, the post is a bit long for my taste, I like the way it approaches the Overton window with this kind of dark-artsy, borderline-political topic and presents a plainly insightful case study.
I’d say Accidentally Load-Bearing structures are (statistically speaking) always the work of another optimizer: someone saw the structure and built another (architectural, behavioral, ...) structure on top of it.
So the key question is whether this structure may at some point have seemed useful to someone. (In a way that can be retrospectively broken down.)
I think the post loses out on mental succinctness by not explaining this.
Thanks for starting this rebellion, Eliezer.
Splitting the Great Idea into parts
Applied to “The Sequences”, or Rationality:
- a collection of good predictive models
- a foundation for a culture more productive and virtuous than mainstream culture
Treating every additional detail as burdensome
It helps to apply scepticism to every post, and internally rank posts by usefulness and credence.
(I’ve since found https://www.lesswrong.com/rationality, which does the job.)
Same.
The “how to think” memes floating around, the cached thoughts of Deep Wisdom—some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.
It’s so unfortunate that “how to think”—the rules of proper belief—are not hardcoded in the system’s firmware, and must instead be entered via user-supplied data that the belief system is built to manage. I’d say this post is centrally about this user-caused variability in system behavior, and the implicit security flaw.
Another aspect: dominant memes—that is, memes that feel good, fair, and high-status—can be functionally dysvirtuous and unilaterally damaging.
Very cool. Less of a distinct mental handle, more of a subtle mental strategy one can find oneself executing across time.
>This cognitive phenomenon is usually lumped in with “confirmation bias.” However, it seems to me that the phenomenon of trying to test positive rather than negative examples, ought to be distinguished from the phenomenon of trying to preserve the belief you started with. “Positive bias” is sometimes used as a synonym for “confirmation bias,” and fits this particular flaw much better.
Subtle distinction I almost missed here. Worth expanding.
I think this page would be more useful if it linked to the individual sequences it lists.
As far as I’ve seen, there is no page that links to all sequences in order, which would be useful for working through them systematically.
This works on a number of levels, although perhaps the most obvious is the divide between styles of thought on the order of “visual thinker”, “verbal thinker”, etc. People who differ here have to constantly reinterpret everything they say to one another, moving from non-native mode to native mode and back with every bit of data exchanged.
Have you written more about those different styles somewhere?
And this is how talking is anchored in Costly Signaling.
(Note that “I dunno, probably around 9 pm.” is still an assurance, though of a different kind: You’re assuring that 9 pm is an honest estimate. If it turns out you make such statements up at random, it will cost you.)
And that’s why talking can convey information at all.
TL;DR It often takes me a bit to grasp what you’re pointing to.
Not because you’re using concepts I don’t know, but because of some kind of translation friction cost. Writing/reading as an ontological handshake.
For example:
>How does task initiation happen at all, given the existence of multiple different possible acts you could take? What tips the mind in the direction of one over another?
The question maps obviously enough to my understandings, in one way or another*, but without contextual cues, decoding the words took me seconds of marginally conscious searching.
* I basically took it as “How do decisions work?”. Though, given the graphic, it looks like you’re implying a kind of privileged passive state before a “decision”/initiation happens, but that part of the model is basically lost on me because its exact shape sits within a meaning search space with too many remaining degrees of freedom.
>There are four things people confuse all the time, and use the same sort of language to express, despite them meaning very different things:
I think my brain felt a bit of “uncertainty about what to do with the rest of the sentence,” in an “is there useful info in there” sense, after the first nine words. The first nine words sufficed for me; together with the context below, they contained ~85% of the meaning I took away.
>Whether you’re journaling, Internal Double Cruxing, doing Narrative Therapy, or exploring Internal Family Systems, there’s something uniquely powerful about letting your thoughts finish.
Strikes me as perhaps a plain lack of Minto (present your conclusion/summary first, and explanations/examples/defenses/nuances second, since that’s how brains parse info). For the first half of the sentence, my brain is made to store blank data, waiting for connections that will turn them into info.
Also reminded of parts of this, which imo generalizes way beyond documentations.
Dunno if this is even useful, but it’d be cool if you had some easy-to-fix bottlenecks.
Downvoters: consider “Deception increases predictability”