Why do I not always have conscious access to my inner parts? Why, when speaking with authority figures, might I have a sudden sense of blankness?
Recently I’ve been thinking about this reaction in the frame of ‘legibility’, à la Seeing Like a State. States would impose organizational structures on societies that were easy to see and control—they made the society more legible—to the actors who ran the state, but these organizational structures were bad for the people in the society.
For example, census data, standardized weights and measures, and uniform languages make it easier to tax and control the population. [Wikipedia]
I’m toying with applying this concept across the stack.
If you have an existing model of people being made up of parts [Kaj’s articles], I think there’s a similar thing happening. I notice I’m angry but can’t quite tell why or get a conceptual handle on it—if it were fully legible and accessible to the conscious mind, it would be much easier to apply pressure and control that ‘part’, regardless of whether the control I am exerting is good. So instead, it remains illegible.
A level up, in a small group conversation, I notice I feel missed, like I’m not being heard in fullness, but someone else directly asks me about my model and I draw a blank, like I can’t access this model or share it. If my model were legible, someone else would get more access to it and be able to control it/point out its flaws. That might be good or it might be bad, but if it’s illegible it can’t be “coerced”/”mistaken” by others.
One more level up: I initially went down this track of thinking for a few reasons, one of which was wondering why prediction and forecasting systems are so hard to adopt within organizations. Operationalization of terms is difficult and it’s hard to get a precise enough question that everyone can agree on, but it’s also very ‘unfun’ to have uncertain terms (people are much more likely to not predict at all than to predict with huge uncertainty). I think the legibility concept comes into play—I am reluctant to put out a term that is part of my model of the world and attach real points/weight to it, because now there’s this “legible leverage point” on me.
I hold this pretty loosely, but there’s something here that rings true, and it’s similar to an observation Robin Hanson made about why people seem to trust human decision makers more than hard standards.
This concept of personal legibility seems associated with the concept of bucket errors, in that theoretically sharing a model and acting on the model are distinct actions, except that I expect legibility concerns are often highly warranted (things might be out to get you).
I like the idea that having some parts of you protected from yourself makes them indirectly protected from people or memes who have power over you (and want to optimize you for their benefit, not yours). Being irrational is better than being transparently rational when someone is holding a gun at your head. If you could do something, you would be forced to do it (against your interests), so it’s better for you if you can’t.
But, what now? It seems like rationality and introspection are a bit like defusing a bomb—great if you can do it perfectly, but it kills you when you do it halfway.
It reminds me of a fantasy book which had a system of magic where wizards could achieve 4 levels of power. Being known as a 3rd-level wizard was a very bad thing, because all 4th-level wizards were trying to magically enslave you—to get rid of a potential competitor, and to gain a powerful slave (I suppose the magical cost of enslaving someone didn’t grow proportionally to the victim’s level).
To use an analogy, being biologically incapable of reaching the 3rd level of magic might be an evolutionary advantage. But at the same time, it would prevent you from ever reaching the 4th level.
The only difference between their presentation and mine is that I’m saying that for 99% of people, 99% of the time, taking ideas seriously is the wrong strategy.
I kinda think this is true, and it’s not clear to me from the outset whether you should “go down the path” of getting access to level 3 magic given the negatives.
Probably good heuristics include proceeding with caution when encountering new or ‘out there’ ideas, remembering you always have the right to say no, finding trustworthy guides, etc.
Related: Reason as memetic immune disorder
Thanks for including that link—seems right, and reminded me of Scott’s old post Epistemic Learned Helplessness