Keeps baffling me how much easier having a concept for something makes thinking about it.
Self
What about this one:
“Hivemind” is best characterized as a state of zero adversarial behavior.
“Humanity becomes a hivemind” is the single least dystopic coherent image of the future.
Illustrative post. The downvotes confuse me.
Depression is a formidable cognitive specialization.
There may have been other, unmentioned optimization targets that also need eloquence
Predictions:
(75%) Groups who successfully[1] adopt trust technology will economically and politically outcompete the rest of their respective societies rather quickly (less than 10 years).
The efficiency gains feasibly up for grabs in the first 15 years compared to the status quo are over 100% (75%) or over 400% (50%).
(66%) Society-wide adoption of trust-building tech is a practical path / perhaps the only practical path towards sane politics in general and sane AI politics in particular.
The whole gestalt of why this is a huge affordance seems self-evident to me; it’s a cognitive weakness of mine to often not know which parts of my thinking need more words spelled out to be legible.
But one intuition is: Regular “natural” human cultures are accidental products sampled from environments where deception-heavy strategies are dominant, and this imposes large deadweight costs on all pursuits of value, including economic value, happiness, friendship, and morality. Explicitly: Most of our cognition goes into deceiving others, and the density of useful acts could be multiple times higher.
- ^
i.e. build mutual understanding at least to the point of family-like intimacy / feeling the others as extensions of oneself, and ideally beyond
I’m not eloquent enough to express how important I think this is.
I feel like such intuitions could be developed. I’m more uncertain where I would use this skill.
Though given how OOD it is, there could be significant alpha up for grabs
(Q: Where would X-Ray vision for cluster structures in 5-dimensional space be extraordinarily useful?)
Hmm. Yeah. It gets difficult to display points with the same XY coordinates and different RGB coordinates
With colors you can in principle display data in 5-dimensional space on a 2D medium without flattening.
Bottlenecks (cognitive):
- intuitively knowing the RGB values of colors you’re seeing
- intuitively perceiving color differences as 3-dimensional distances
Feasible? Useful?
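For concreteness, a minimal sketch of the XY+RGB encoding (Python/matplotlib; the data and names are illustrative, not from anything above):

```python
# Minimal sketch: render 5D points on a 2D medium by mapping
# dims 1-2 to position and dims 3-5 to RGB color (toy data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
points = rng.random((500, 5))   # 500 points in [0, 1]^5

xy = points[:, :2]              # dims 1-2 -> spatial position
rgb = points[:, 2:]             # dims 3-5 -> color channels

plt.scatter(xy[:, 0], xy[:, 1], c=rgb, s=30)
plt.title("5D data: XY position + RGB color")
plt.show()

# The second bottleneck above, as math: perceiving the color
# difference between two points as Euclidean distance in RGB space.
color_distance = np.linalg.norm(rgb[0] - rgb[1])
```

This also makes the overlap problem concrete: two points with the same XY but different RGB land on the same pixel.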
Latest in Shit Claude Says:
Credibility Enhancing Displays (CREDs)
Ideas spread not through their inherent quality but through costly displays of commitment by believers. Words are cheap; actions that would be irrational if the belief were false are persuasive.
Predictive angle: The spread of beliefs correlates more strongly with observable sacrifices made by believers than with evidence or argument quality.
Novel implication: Rationalists often fail to spread ideas despite strong arguments because they don’t engage in sufficient credibility enhancing displays. Effective belief transmission requires demonstration through personal cost[1].
The easiest way for rats to do this more may be “retain nonchalant confidence when talking about things you’re certain are true, even in the face of audience skepticism”
- ^
I think the “personal cost” angle is mistaken. Costly Signaling only requires that the act would be costly if you didn’t possess the trait.
- ^
Aspies certainly seem to do this less!
You mean, like him as a blogger? Or as a person in real life?
The latter? Like, I subconsciously parse his blogging voice as if it were a person in my tribal surroundings, and I like/admire/relate to that virtual person, and I think this is what causes some aspect of persuasion
I mean yes it’s embarrassing, but it’s what I see in myself and what seems to be most consistent with what everyone else is doing, certainly more consistent than what they claim they’re doing.
E.g. it seems rare for someone who actively dis-appreciates the Sequences to not also dislike Eliezer, for what seem like vibes-based reasons more than content-based reasons
But then again, all models are false!
If I peer into my own past, where arguably I was more autistic than today, I can see that my standards for admiration seem to have been much stricter. I basically wouldn’t ever copy role models because there were no role models to copy. This may be the shape of an important caveat
They do, but the explanation proposed here is the simplest and most exact fit for everything I know.
E.g. it became immediately clear that the Sequences wouldn’t work nearly as well for me if I didn’t like Eliezer.
Or the way fashion models are of course not selected for attractiveness but for more mimetic-copying-inducing high-status traits like height/confidence/presence/authenticity
and others
And yeah, not all of the Claude examples are good; I hadn’t cherry-picked
More thoughts that may or may not be directly relevant:
What’s missing from my definition is that deception happens solely via “stepping in front of the camera”, i.e. via the regular sensory channels of the deceived optimizer; brainwashing or directly modifying memory is not deception
From this it follows that to deceive is either to cause a false pattern recognition or to prevent a correct one, and for this you indeed need familiarity with the victim’s perceptual categories
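A toy sketch of that distinction (my own illustrative Python, not an established formalism; all names are made up): the deceiver may only write to the victim’s sensory channel, never to its internal state.

```python
# Toy model: deception happens only "in front of the camera",
# i.e. through the victim's regular sensory channel.
class Optimizer:
    def __init__(self):
        self.model = {}  # internal world-model

    def observe(self, key, value):
        # pattern recognition: beliefs update only via percepts
        self.model[key] = value

    def act(self):
        # behavior is steered by the (possibly falsified) model
        return "flee" if self.model.get("predator_nearby") else "forage"

victim = Optimizer()

# Deception: cause a false pattern recognition via the sensory channel.
victim.observe("predator_nearby", True)  # fake rustling in the bushes
assert victim.act() == "flee"            # behavior steered by false model

# NOT deception under this definition: bypassing the sensory channel.
victim.model["predator_nearby"] = True   # direct memory modification
```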
I’d like to say more re: hostile telepaths or other deception frameworks but am unsure what your working models are
I’d say weirdness is about not being predictable
Perhaps along some generalized conformity axis—being perceived as a potential risk to the social order.
Deception: An optimizer falsifies another optimizer’s models in order to steer its behavior
Had a minor braincoom discovering Mimetic Theory
Best model/compression I took away is a mental image evoked by “Desire is triangular, not linear” depicting how desires are created via copying
Claude 3.7 explains some basics:
Desire is triangular, not linear—We don’t want things directly; we want what others want. Every desire has a hidden “model” we’re unconsciously imitating.
Conversion happens through the model—We convert to a new worldview by imitating someone we admire, not through intellectual persuasion. Reason follows mimetic conversion.
The interdividual self—Girard rejects the autonomous individual entirely. The “self” is actually a collection of desires borrowed from others. What we call “personality” is just the unique pattern of our imitations.
Common Examples:
Kids fighting over the same toy while ignoring identical ones
Fashion trends spreading through social groups
Career paths chosen because respected peers chose them
Romantic triangles where someone becomes attractive once they’re dating someone else
Consumer frenzies (iPhones, limited editions) driven by visible queues and scarcity
Gentrification patterns where neighborhoods become desirable because the “right people” moved there
Academic research clusters forming around suddenly “hot” topics
Subtler Manifestations:
The desire for “authenticity” itself (ironic since it’s mimetically transmitted)
Self-improvement goals based on what’s celebrated in your social circle
Political opinions adopted from respected figures in your group
Food preferences that align with your aspirational identity group
Hobbies pursued because they signal belonging to certain communities
Creative outputs that unconsciously mirror admired creators
Parenting styles that copy other parents you respect
Given the above, will antiandrogens make me more introverted? And if so, are there cognitive benefits to introversion? (I think so)
2 days ago I started taking Reishi + Chasteberry + Spearmint, OTC supplements with supposedly mild but statistically significant antiandrogenic effects
I’ll be amused if that ends my “frequent public posting” streak before long
(Vague musing)
There’s a type of theory I’d call a “Highlevel Index” into an information body: for example, Predictive Processing is a highlevel index for Neurology, Natural Selection is a highlevel index for Psychology, and Game Theory and Signaling Theory are highlevel indexes for all kinds of things.
They’re tools for delving into information bodies. They give you good taste for lower-level theories, a better feel for which pieces of knowledge are and aren’t predictive. If you’re like me, and you’re trying to study Law or Material Science but you’ve got no highlevel indexes for these domains, you’re left standing there, lost, without evaluability, in front of a vast sea of lower-level, more detailed knowledge. You could probs make iterative bottom-up progress by absorbing detail-info layer by layer and synthesizing or discovering higher-level theories from what you’ve seen, but that’s an unknown and unknowable-feeling amount of work. Standing at the foot of the mountain, you’re not feeling it. There’s no affordance waiting to be grasped.
One correct framing here is that I’m whining because not all learning is easy.
But also: I do believe the solution-space ceiling here is much higher than we notice, and that marginal exploration is worth some opportunity cost.
So!
Besides what’s common knowledge in rat culture, what are your fave highlevel indexes?
What non-redundant authors besides Eliezer & co talk a lot in highlevel indexes?
Are there established or better verbal pointers to highlevel indexes?
Improved my intuitions, ty.