How should someone behave if they’re within one or two standard deviations of average smarts, and think that the authorities think and act like that?
Hmm… firstly, I hope they do not think and act like that. The world looks to me like most people aren’t acting like that most of the time (most people I know have not been killed, though most have been locked in rooms to some extent). If it were true, I’m not sure I believe that it’s of primary importance — just as the person in the proverbial Chinese Room does not understand Chinese, even if many in positions of authority are wantonly cruel and dominating, I still personally experience a lot of freedoms. I’d need to think about what the actual effect of their intentions is, how large that effect is, and how changing it or punishing certain consequent behaviors compares to the rest of my list of problems-to-solve.
You might want to go through the thought experiment of trying to persuade the protagonist of one of the movies I mentioned above to try seasteading, prediction markets, or an online community, instead of the course of action they take in the movie.
This suggestion is quite funny, just from reading your description of They Live and seeing the movie poster. On first blush it sounds quite childishly naive on my part to attempt it. But perhaps I will watch the film, think it through some more and figure out more precisely whether I think such a strategy makes any sense or why it would fail.
Initially, asking such a person to play a longer game feels like asking them to “keep up the facade” while working on a solution that has only around a 30% chance of working. From your descriptions I anticipate the people in They Live and Office Space to find this too hard after a while and snap (or else they’ll lose their grasp on reality). On the other hand I think people sometimes pull off subterfuges successfully. While we’re talking about films I have not seen, from what I’ve heard Schindler’s List sounds like one where a character noticed his society was enacting distinctly evil policies and strategically worked to combat it without snapping / doing immoral and (to me) crazy things. (Perhaps I will watch that and find out that he does!) I wonder what the key difference there is.
(I will regrettably move on to some other activities for now, construction deadlines are this Monday.)
Hmm… firstly, I hope they do not think and act like that.
Maybe this was unclear, but I meant to distinguish two questions so that you could try to answer one somewhat independently of the other:
1. What determines various authorities’ actions?
2. How should a certain sort of person, with less or different information than you, model the authorities’ actions?
Specifically I was asking you to consider a specific hypothesis as the answer to question 2: that for a lot of people who aren’t skilled social scientists, the behavior of various authorities can look capricious or malicious even if other people have privileged information that allows them to predict those authorities’ behavior better and navigate interactions with them relatively freely and safely.
To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation. For example, if you’re wrong about how much violence will be applied and by whom if you stop conforming, you might mistakenly physically attack someone who was never going to hurt you, under the impression that it is a justified act of preemption.
On this model, the way in which the behavior of people who’ve decided to stop conforming seems bizarre and erratic to you implies that you have a lot of implicit knowledge of how the world works that they do not. Another piece of fiction worth looking at in this context is Burroughs’s Naked Lunch. I’ve only seen the movie version, but I would guess the book covers the same basic content—the disordered and paranoid perspective of someone who has a vague sense that they’re “under cover” vs society, but no clear mechanistic model of the relevant systems of surveillance or deception.
To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation.
Not yet answering the central question you asked, but this example is interesting to me, because it both sounds like a severe mistake I have made, and I don’t quite understand how it happens. When anxiously trying to pass the test, what false assumption is the person making about the authority’s power?
I can try to figure it out for myself… I have tried to pass tests (literally, at university) and treated passing them as the measure of a person. I have done this in other situations, holding someone’s approval as the standard to meet and presuming that there is some fair game I ought to succeed at to attain their approval. This is not a useless strategy, even while it might blind me to the ways in which (a) the test is dumb, or (b) I can succeed via other mechanisms (e.g. side channels, or playing other games entirely).
In these situations I have attributed to them far too much real power, and later felt that I majorly wasted my time and effort caring about them and their games when they were really so powerless. But I still do not quite see the exact mistake in my cognition, where I went from a true belief to a false one about their powers.
...I think the mistake has to do with identifying their approval as the scoring function of a fair game, when it actually only approximated a fair game in certain circumstances, and outside of that may not be related whatsoever. (“may not be”! — it is of course not related to that whatsoever in a great many situations.) The problem is knowing when someone’s approval is trying to approximate the scoring function of a fair (and worthwhile) game, and when it is not. But I’m still not sure why people end up getting this so wrong.
There’s a common fear response, as though disapproval = death or exile, not a mild diminution in opportunities for advancement. Fear is the body’s stereotyped configuration optimized to prevent or mitigate imminent bodily damage. Most such social threats do not correspond to a danger that is either imminent or severe, but are instead more like moves in a dance that trigger the same interpretive response.
Re-reading my comment, the thing that jumps to mind is that “I currently know of no alternative path to success”. When I am given the option between “Go all in on this path being a fair path to success” and “I know of no path to success and will just have to give up working my way along any particular path, and am instead basically on the path to being a failure”, I find it quite painful to accept the latter, and find it easier on the margin to self-deceive about how much reason I have to think the first path works.
I think a few times in my life (e.g. trying to get into the most prestigious UK university, trying to be a successful student once I got in) I could think of no other path in life I could take than the one I was betting on. This made me quite desperate to believe that the current one was working out okay.
I think “fear” is an accurate description of my reaction to thinking about the alternative (of failure): freezing up, not being able to act.
Reality is sufficiently high-dimensional and heterogeneous that if it doesn’t seem like there’s a meaningful “explore/investigate” option with unbounded potential upside, you’re applying a VERY lossy dimensional reduction to your perception.
One more thing: the protagonists of The Matrix and Terry Gilliam’s Brazil (1985) are relatively similar to EAs and Rationalists so you might want to start there, especially if you’ve seen either movie.
(I appreciate the reply, I will not get back to this thread until Monday at the earliest. Any ping to reply mid next week is very welcome.)