I’m not sure what exactly my answer is, but it’s a good question, so here’s a babble of pointers at what I think ‘crazy’ means, in case that helps someone else figure out a useful definition:
Take actions that most people can confidently know at the time that I will later on not endorse (e.g. physically assault my good friend for fun, set my house on fire, pick up a heroin habit, murder a stranger), or that I wouldn’t endorse if you just gave me a bit more basic social security like money, friends, family, etc. (such as murdering someone on the street for some food/money, spending days preparing lies to tell someone in order to trick them into giving me resources, hunting and following a person until they’re alone and then trying to get them to give me stuff, stalking someone because I think they’ve fallen in love with me, etc.).
Believe things that most people can confidently know that I don’t have the evidence for and will later on not believe (e.g. that demons are talking to me, that I am literally Napoleon, that I have psychic powers and can read anyone’s mind at any time).
When someone does or believes things that I (Ben) cannot empathize with, or cannot understand why they’d do them, unless their words/actions didn’t really have much relationship to reality (e.g. constantly telling stories that are obviously lies or that aren’t internally coherent).
The first one seems like it would describe most people, e.g. many, many people repeatedly drink enough alcohol to predictably acutely regret it later.
The second would seem to exclude incurable cases, and I don’t see how to repair that defect without including ordinary religious people.
The third would also seem to include ordinary religious people.
I think these problems are also problems with the OP’s frame. If taken literally, the OP is asking about a currently ubiquitous or at least very common aspect of the human condition, while assuming that it is rare, intersubjectively verified by most, and pathological.
My steelman of the OP’s concern would be something like “why do people sometimes suddenly, maladaptively, and incoherently deviate from the norm?”, and I think a good answer would take into account ways in which the norm is already maladaptive and incoherent, such that people might legitimately be sufficiently desperate to accept that sort of deviance as better for them in expectation than whatever else was happening, instead of starting from the assumption that the deviance itself is a mistake.
If it’s hard to see how apparently maladaptive deviance might not be a mistake, consider a North Korean Communist asking about attempted defectors—who observably often fail, end up much worse off, and express regret afterwards—“why do our people sometimes turn crazy?”. From our perspective out here it’s easy to see what the people asking this question are missing.
This still leaves me confused about why these people made such terrible mistakes. Many people can look at their society and realize how it is cognitively distorting and tricking them into evil behavior. It seems aggressively dumb to then decide that personally murdering people you think are evil is straightforwardly fine and a good strategy, or that you have psychic powers and should lock people in rooms.[1] I think there are more modest proposals, like seasteading or building internet communities or legalizing prediction markets, that have a strong shot of fixing a chunk of the insanity of your civilization without leaving you entirely out in the wilderness, having to rederive everything for yourself and leading you to shooting yourself in the foot quite so quickly.
I expect all North Korean defectors will get labeled evil and psychotic by the state. Like a sheeple, I don’t think all such people will be labeled this way by everyone in my personal society, though I straightforwardly acknowledge that a substantial fraction will. I think there were other options here that were less… wantonly dysfunctional.
Or stealing billions of dollars from people. But to be honest, that one seems the least ‘crazy’ to me: it doesn’t seem that hard to explain how someone could trick themselves into thinking that they should personally have all of the resources. I’ll say I’m not at all sure that these three things do form a natural category, though I still think it’s interesting to ask “Supposing they do, what is the key commonality?”
I think part of what happens in these events is that they reveal how much disorganized or paranoid thought went into someone’s normal persona. You need to have a lot of trust in the people around you to end up with a plan like seasteading or prediction markets—and I notice that those ideas have been around for a long time without visibly generating a much saner & lower-conflict society, so it does not seem like that level of trust is justified.
A lot of people seem to navigate life as though constantly under acute threat and surveillance (without a clear causal theory of how the threat and surveillance are paid for), expecting to be acutely punished the moment they fail to pass as normal—so things they report believing are experienced as part of the act, not the base reality informing their true sense of threat and opportunity. So it’s no wonder that if such people get suddenly jailbroken without adequate guidance or space for reflection, they might behave like a cornered animal and suddenly turn on their captors seemingly at random.
For a compelling depiction of how this might feel from the inside, I strongly recommend John Carpenter’s movie They Live (1988), which tells the story of a vagrant construction worker who finds an enchanted pair of sunglasses that translate advertisements into inaccurate summaries of the commands embedded in them, and make some people look like creepy aliens. So without any apparent explanation, provocation, or warning, he starts shooting “aliens” on the street and in places of business like grocery stores and banks, and eventually blows up a TV transmission station to stop the evil aliens from broadcasting their mind-control waves. The movie is from his perspective and unambiguously casts him as the hero.

More recently, the climax of The Matrix (1999), a movie about a hacker waking up to systems of malevolent authoritarian control under which he lives, strikingly resembles the Columbine massacre (1999), which actually happened. See also Fight Club (1999).

Office Space (1999) provides a more optimistic take: A wizard casts a magic spell on the protagonist to relax his body, which causes him to become unresponsive to the social threats he was previously controlled by. This causes his employer to perceive him as too powerful for his assigned level in the pecking order, and he is promoted to rectify the situation. He learns his friends are going to be laid off, is indignant at the unfairness of this, and gets his friends together to try to steal a lot of money from their employer. This doesn’t go very well, and he eventually decides to trade down to a lower social class instead and join a friend’s construction crew, while his friends remain controlled by social threat.
I’ve noticed that on phone calls with people serving as members of a big bureaucratic organization like a bank or hospital, I can’t get them to do anything by appealing to policies they’re officially required to follow, but talking like I expect them to be afraid of displeasing me sometimes makes things happen. On the positive side, they also seem more compliant if they hear my baby babbling in the background, possibly because it switches them into a state of “here is another human who might have real constraints and want good things, and therefore I sympathize with them”—which implies that their normal on-call state is something quite different.
I’m not sure whether you were intentionally alluding to cops and psychiatrists here, but lots of people effectively experience them as having something like this attitude:
It seems aggressively dumb to then decide that personally murdering people you think are evil is straightforwardly fine and a good strategy, or that you have psychic powers and should lock people in rooms.
How should someone behave if they’re within one or two standard deviations of average smarts, and think that the authorities think and act like that? I think that’s a legit question and one I’ve done a lot of thinking about, since as someone who’s better-oriented in some ways, I want to be able to advise such people well. You might want to go through the thought experiment of trying to persuade the protagonist of one of the movies I mentioned above to try seasteading, prediction markets, or an online community, instead of the course of action they take in the movie. If it goes well, you have written a fan fic of significant social value. If it goes poorly, you understand why people don’t do that.
Two years ago, I took a high dose of psychedelic mushrooms and was able to notice the sort of immanent-threat model I described above in myself. It felt as though there was an implied threat to cast me out alone in the cold if I didn’t channel all my interactions with others through an “adult” persona. Since I was in a relatively safe quiet environment with friends in the next room, I was able to notice that this didn’t seem mechanistically plausible, and call the bluff of the internalized threat: I walked into the next room, asked my friends for cuddles, and talked through some of my confusion about the extent to which my social interface with others justified the expense of maintaining an episodic memory. But this took a significant amount of courage and temporarily compromised my balance—my ability to stand up or even feel good sitting on a couch elevated above the ground. Likely most people don’t have the kinds of friends, courage, patience, rational skepticism, theoretical grounding in computer science, evolution, and decision theory, or living situation for that sort of refactoring to go well.
How should someone behave if they’re within one or two standard deviations of average smarts, and think that the authorities think and act like that?
Hmm… firstly, I hope they do not think and act like that. The world looks to me like most people aren’t acting like that most of the time (most people I know have not been killed, though most have been locked in rooms to some extent). Even if it were true, I’m not sure it’s of primary importance: just as the person in the proverbial Chinese Room does not understand Chinese, even if many in positions of authority are wantonly cruel and dominating, I still personally experience a lot of freedoms. I’d need to think about what the actual effect of their intentions is, how large it is, and how changing it or punishing certain consequent behaviors compares to the other problems-to-solve on my list.
You might want to go through the thought experiment of trying to persuade the protagonist of one of the movies I mentioned above to try seasteading, prediction markets, or an online community, instead of the course of action they take in the movie.
This suggestion is quite funny, just from reading your description of They Live and seeing the movie poster. On first blush it sounds quite childishly naive on my part to attempt it. But perhaps I will watch the film, think it through some more and figure out more precisely whether I think such a strategy makes any sense or why it would fail.
Initially, asking such a person to play a longer game feels like asking them to “keep up the facade” while working on a solution that only has something like a 30% chance of working. From your descriptions I anticipate the people in They Live and Office Space would find this too hard after a while and snap (or else lose their grasp on reality). On the other hand, people sometimes do pull off subterfuges successfully. While we’re talking about films I have not seen: from what I’ve heard, Schindler’s List sounds like one where a character noticed his society was enacting distinctly evil policies and strategically worked to combat them without snapping / doing immoral and (to me) crazy things. (Perhaps I will watch it and find out that he does!) I wonder what the key difference there is.
(I will regrettably move on to some other activities for now, construction deadlines are this Monday.)
Hmm… firstly, I hope they do not think and act like that.
Maybe this was unclear, but I meant to distinguish two questions so that you could try to answer one somewhat independently of the other:
1. What determines various authorities’ actions?
2. How should a certain sort of person, with less or different information than you, model the authorities’ actions?
Specifically, I was asking you to consider a specific hypothesis as the answer to question 2: that for a lot of people who aren’t skilled social scientists, the behavior of various authorities can look capricious or malicious, even if other people have privileged information that allows them to predict those authorities’ behavior better and navigate interactions with them relatively freely and safely.
To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation. For example, if you’re wrong about how much violence will be applied and by whom if you stop conforming, you might mistakenly physically attack someone who was never going to hurt you, under the impression that it is a justified act of preemption.
On this model, the way in which the behavior of people who’ve decided to stop conforming seems bizarre and erratic to you implies that you have a lot of implicit knowledge of how the world works that they do not. Another piece of fiction worth looking at in this context is Burroughs’s Naked Lunch. I’ve only seen the movie version, but I would guess the book covers the same basic content—the disordered and paranoid perspective of someone who has a vague sense that they’re “under cover” vs society, but no clear mechanistic model of the relevant systems of surveillance or deception.
To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation.
Not yet answering the central question you asked, but this example is interesting to me, as this both sounds like a severe mistake I have made and also I don’t quite understand how it happens. When anxiously trying to pass the test, what false assumption is the person making about the authority’s power?
I can try to figure it out for myself… I have tried to pass tests (literally, at university) and held passing them as the standard to meet. I have done this in other situations, holding someone’s approval as the standard to meet and presuming that there is some fair game I ought to succeed at to attain their approval. This is not a useless strategy, even while it might blind me to the ways in which (a) the test is dumb, (b) I can succeed via other mechanisms (e.g. side channels, or playing other games entirely).
In these situations I have attributed to them far too much real power, and later on have felt like I have majorly wasted my time and effort caring about them and their games when they were really so powerless. But I still do not quite see the exact mistake in my cognition, where I went from a true belief to a false one about their powers.
...I think the mistake has to do with identifying their approval as the scoring function of a fair game, when it actually only approximated a fair game in certain circumstances, and outside of that may not be related whatsoever. (“may not be”! — it is of course not related to that whatsoever in a great many situations.) The problem is knowing when someone’s approval is trying to approximate the scoring function of a fair (and worthwhile) game, and when it is not. But I’m still not sure why people end up getting this so wrong.
There’s a common fear response, as though disapproval = death or exile, not a mild diminution in opportunities for advancement. Fear is the body’s stereotyped configuration optimized to prevent or mitigate imminent bodily damage. Most such social threats do not correspond to a danger that is either imminent or severe, but are instead more like moves in a dance that trigger the same interpretive response.
Re-reading my comment, the thing that jumps to mind is that “I currently know of no alternative path to success”. When I am given the option between “Go all in on this path being a fair path to success” and “I know of no path to success and will just have to give up working my way along any particular path, and am instead basically on the path to being a failure”, I find it quite painful to accept the latter, and find it easier on the margin to self-deceive about how much reason I have to think the first path works.
I think a few times in my life (e.g. trying to get into the most prestigious UK university, trying to be a successful student once I got in) I could think of no other path in life I could take than the one I was betting on. This made me quite desperate to believe that the current one was working out okay.
I think “fear” is an accurate description of my reaction to thinking about the alternative (of failure): freezing up, not being able to act.
Reality is sufficiently high-dimensional and heterogeneous that if it doesn’t seem like there’s a meaningful “explore/investigate” option with unbounded potential upside, you’re applying a VERY lossy dimensionality reduction to your perception.
One more thing: the protagonists of The Matrix and Terry Gilliam’s Brazil (1985) are relatively similar to EAs and Rationalists so you might want to start there, especially if you’ve seen either movie.
You need to have a lot of trust in the people around you to end up with a plan like seasteading or prediction markets—and I notice that those ideas have been around for a long time without visibly generating a much saner & lower-conflict society, so it does not seem like that level of trust is justified.
I would say that it requires an advanced understanding of economics, incentives, and how society works, rather than trust in people. Understanding how a mechanism works reduces the requirement for trust. (They are substitutes in my mind.)
I think one of the reasons it would be hard to get a recently jailbroken, not-that-intellectual person on board with such a plan is that it would involve giving them a novel understanding of how the world works, which, for some reason, people are rarely able to do intentionally. It can easily fall back to an ask of “trust” that you know something the other person doesn’t, rather than a successful communication of understanding. And then, after some number of weeks or months or years, the world will introduce enough unpredictable noise that the trust will run out, and the person will go back to using the world as they understand it, where they were never going to invent a concept like prediction markets.
...but hey, perhaps I’m not giving them enough credit. Maybe they would ask themselves questions like “where do all of the cool technology and inventions around me come from?”, start building up a model of science and of successful groups, start figuring out which sorts of reasoning actually work and what sorts of structures in society get good things done on purpose, and then start to notice which parts of society can give you more of those powers, seeing things like markets, personal freedoms, and building mechanistic world models as ways to build up those forces in society.
On the one hand, this path can take decades and most humans do not go down it. On the other hand, the evidence required to build up a functional worldview has become increasingly visible as technological progress has sped up over the centuries, and so much of the world is viewable at home on a computer screen. Still, teaching anyone anything on purpose is hard in full generality, for some reason, and the moment someone is having a crisis of faith is a hard time to have to bet on doing it successfully.
(Aside: This gives a more specific motive to explaining how the world works to a wider audience. “I don’t just think it’s generically nice for everyone to understand the world they live in, but I specifically am hoping that the next person to finally see the ways their society enacts evil doesn’t snap and themself do something stupid and evil, but is instead able to wield the true forces of the world to improve it.”)
I do like your definition of “crazy” that uses “an idea [I / the crazy person] would not endorse later.” I think it dissolves a lot of the eeriness around the word that makes it kind of overly heavy-hitting when used, but also, I think that if you dissolve it in this way, it pretty much incentivizes dropping the word entirely (which I think is a good thing, but maybe not everyone would).
If we define it to mean ideas (not the person) that the person holding them would eventually drop or update to something else, that’s closer to the definition of “wrong”, which would apply to literally everyone at different points in their lives and to varying degrees at any time. But then maybe this is too wide, and doesn’t capture the meaning of the word implied in the OP’s question, namely, “why do more people than usual go crazy within EA / Rationality?” Perhaps what is meant by the word in this context is when some people seem to hold wrong ideas that are persistent or cannot be updated at all. For the record, I am skeptical that this form of “crazy” is really all that prevalent when defined this way.
If we define it as “wrong ideas” (things which won’t be endorsed later) then it does offer a rather simple answer to the OP’s question: EA / Rationality is rather ambitious about testing out new beliefs at the forefront of society, so they will by definition hold beliefs that aren’t held by the majority of people, and which by design, are ambitious and varied enough to be expected to be proven wrong many times over time.
If being ambitious about having new or unusual ideas carries with it accepted risks of being wrong more often than usual, then perhaps a certain level of craziness has to be tolerated as well.
I agree that stealing billions while endorsing high-trust behavior might superficially seem like a more reasonable thing to do if you don’t have a good moral theory for why you shouldn’t, and you think effective charities can do an exceptional amount of good with a lot more money. But if you think you live in a society where you can get away with that, then you should expect that wherever you aren’t doing more due diligence than the people you stole from, you’re the victim of a scam. So I don’t think it really adds up, any more than the other sorts of behaviors you described.
Two years ago, I took a high dose of psychedelic mushrooms and was able to notice the sort of immanent-threat model I described above in myself. It felt as though there was an implied threat to cast me out alone in the cold if I didn’t channel all my interactions with others through an “adult” persona. Since I was in a relatively safe quiet environment with friends in the next room, I was able to notice that this didn’t seem mechanistically plausible, and call the bluff of the internalized threat: I walked into the next room, asked my friends for cuddles, and talked through some of my confusion about the extent to which my social interface with others justified the expense of maintaining an episodic memory. But this took a significant amount of courage and temporarily compromised my balance—my ability to stand up or even feel good sitting on a couch elevated above the ground. Likely most people don’t have the kinds of friends, courage, patience, rational skepticism, theoretical grounding in computer science, evolution, and decision theory, or living situation for that sort of refactoring to go well.
Hmm… firstly, I hope they do not think and act like that. The world looks to me like most people aren’t acting like that most of the time (most people I know have not been killed, though most have been locked in rooms to some extent). If it were true, I’m not sure I believe that it’s of primary importance — just as the person in the proverbial Chinese Room does not understand Chinese, even if many in positions of authority are wantonly cruel and dominating, I still personally experience a lot of freedoms. I’d need to think about what the actual effect is of their intentions, the size, and how changing it or punishing certain consequent behaviors compares to the other list of problems-to-solve.
This suggestion is quite funny, just from reading your description of They Live and seeing the movie poster. On first blush it sounds quite childishly naive on my part to attempt it. But perhaps I will watch the film, think it through some more and figure out more precisely whether I think such a strategy makes any sense or why it would fail.
Initially, asking such a person to play a longer game feels like asking them to “keep up the facade” while working on a solution that only has something like a 30% chance of working. From your descriptions I anticipate the people in They Live and Office Space would find this too hard after a while and snap (or else lose their grasp on reality). On the other hand, I think people sometimes pull off subterfuges successfully. While we’re talking about films I have not seen, from what I’ve heard Schindler’s List sounds like one where a character noticed his society was enacting distinctly evil policies and strategically worked to combat it without snapping / doing immoral and (to me) crazy things. (Perhaps I will watch that and find out that he does!) I wonder what the key difference there is.
(I will regrettably move on to some other activities for now, construction deadlines are this Monday.)
Maybe this was unclear, but I meant to distinguish two questions so that you could try to answer one somewhat independently of the other:
1. What determines various authorities’ actions?
2. How should a certain sort of person, with less or different information than you, model the authorities’ actions?
Specifically I was asking you to consider a specific hypothesis as the answer to question 2: that for a lot of people who aren’t skilled social scientists, the behavior of various authorities can look capricious or malicious even if other people have privileged information that allows them to predict those authorities’ behavior better and navigate interactions with them relatively freely and safely.
To add a bit of precision here, someone who avoids getting hurt by anxiously trying to pass the test (a common strategy in the Rationalist and EA scene) is implicitly projecting quite a bit more power onto the perceived authorities than they actually have, in ways that may correspond to dangerously wrong guesses about what kinds of change in their behavior will provoke what kinds of confrontation. For example, if you’re wrong about how much violence will be applied and by whom if you stop conforming, you might mistakenly physically attack someone who was never going to hurt you, under the impression that it is a justified act of preemption.
On this model, the way in which the behavior of people who’ve decided to stop conforming seems bizarre and erratic to you implies that you have a lot of implicit knowledge of how the world works that they do not. Another piece of fiction worth looking at in this context is Burroughs’s Naked Lunch. I’ve only seen the movie version, but I would guess the book covers the same basic content—the disordered and paranoid perspective of someone who has a vague sense that they’re “under cover” vs society, but no clear mechanistic model of the relevant systems of surveillance or deception.
Not yet answering the central question you asked, but this example is interesting to me, as this both sounds like a severe mistake I have made and also I don’t quite understand how it happens. When anxiously trying to pass the test, what false assumption is the person making about the authority’s power?
I can try to figure it out for myself… I have tried to pass tests (literally, at university) and held passing them as the measure of a person. I have done this in other situations, holding someone’s approval as the standard to meet and presuming that there is some fair game I ought to succeed at to attain their approval. This is not a useless strategy, even while it might blind me to the ways in which (a) the test is dumb, or (b) I can succeed via other mechanisms (e.g. side channels, or playing other games entirely).
In these situations I have attributed to them far too much real power, and later on have felt like I have majorly wasted my time and effort caring about them and their games when they were really so powerless. But I still do not quite see the exact mistake in my cognition, where I went from a true belief to a false one about their powers.
...I think the mistake has to do with identifying their approval as the scoring function of a fair game, when it actually only approximated a fair game in certain circumstances, and outside of that may not be related whatsoever. (“may not be”! — it is of course not related to that whatsoever in a great many situations.) The problem is knowing when someone’s approval is trying to approximate the scoring function of a fair (and worthwhile) game, and when it is not. But I’m still not sure why people end up getting this so wrong.
There’s a common fear response, as though disapproval = death or exile, not a mild diminution in opportunities for advancement. Fear is the body’s stereotyped configuration optimized to prevent or mitigate imminent bodily damage. Most such social threats do not correspond to a danger that is either imminent or severe, but are instead more like moves in a dance that trigger the same interpretive response.
Re-reading my comment, the thing that jumps to mind is that “I currently know of no alternative path to success”. When I am given the option between “Go all in on this path being a fair path to success” and “I know of no path to success and will just have to give up working my way along any particular path, and am instead basically on the path to being a failure”, I find it quite painful to accept the latter, and find it easier on the margin to self-deceive about how much reason I have to think the first path works.
I think a few times in my life (e.g. trying to get into the most prestigious UK university, trying to be a successful student once I got in) I could think of no other path in life I could take than the one I was betting on. This made me quite desperate to believe that the current one was working out okay.
I think “fear” is an accurate description of my reaction to thinking about the alternative (of failure): freezing up, not being able to act.
Reality is sufficiently high-dimensional and heterogeneous that if it doesn’t seem like there’s a meaningful “explore/investigate” option with unbounded potential upside, you’re applying a VERY lossy dimensional reduction to your perception.
(I appreciate the reply, I will not get back to this thread until Monday at the earliest. Any ping to reply mid next week is very welcome.)
One more thing: the protagonists of The Matrix and Terry Gilliam’s Brazil (1985) are relatively similar to EAs and Rationalists so you might want to start there, especially if you’ve seen either movie.
I would say that it requires an advanced understanding of economics, incentives, and how society works, rather than trust in people. Understanding how a mechanism works reduces the requirement for trust. (They are complements in my mind.)
I think one of the reasons it would be hard to get a recently jailbroken, not-that-intellectual person on board with such a plan is that it would involve giving them a novel understanding of how the world works, which somehow people are rarely able to do intentionally; it can easily fall back to an ask of “trust” that you know something the other person doesn’t, rather than a successful communication of understanding. And then, after some number of weeks or months or years, the world will introduce enough unpredictable noise that the trust will run out and the person will go back to using the world as they understand it, where they were never going to invent a concept like prediction markets.
...but hey, perhaps I’m not giving them enough credit. Maybe they would ask themselves questions like “where do all the cool technology and inventions around me come from?” and start building up a model of science and of successful groups, start figuring out which sorts of reasoning actually work and what sorts of structures in society get good things done on purpose, and then start to notice which parts of society can give you more of those powers, coming to see things like markets, personal freedoms, and the building of mechanistic world models as ways to strengthen those forces in society.
On the one hand, this path can take decades and most humans do not go down it. On the other hand, the evidence required to build up a functional worldview is increasingly visible as technological progress has sped up over the centuries, and so much of the world is viewable at home on a computer screen. Still, teaching anyone anything on purpose is hard in full generality, for some reason, and the middle of someone’s crisis of faith is a hard time to have to bet on doing it successfully.
(Aside: This gives a more specific motive for explaining how the world works to a wider audience. “I don’t just think it’s generically nice for everyone to understand the world they live in; I specifically am hoping that the next person to finally see the ways their society enacts evil doesn’t snap and do something stupid and evil themselves, but is instead able to wield the true forces of the world to improve it.”)
I do like your definition of “crazy” that uses “an idea [I / the crazy person] would not endorse later.” I think it dissolves a lot of the eeriness around the word that makes it kind of overly heavy-hitting when used, but also, I think that if you dissolve it in this way, it pretty much incentivizes dropping the word entirely (which I think is a good thing, but maybe not everyone would).
If we define it to mean ideas (not the person) that the person holding them would eventually drop or update to something else, that’s more like a definition of “wrong”, which would apply to literally everyone at different points in their lives and to varying degrees at any time. But then maybe this is too wide, and doesn’t capture the meaning of the word implied in the OP’s question, namely, “why do more people than usual go crazy within EA / Rationality?” Perhaps what is meant by the word in this context is when some people seem to hold wrong ideas that are persistent or cannot be updated at all. For the record, I am skeptical that this form of “crazy” is really all that prevalent when defined this way.
If we define it as “wrong ideas” (things which won’t be endorsed later), then it does offer a rather simple answer to the OP’s question: EA / Rationality is rather ambitious about testing out new beliefs at the forefront of society, so its members will by definition hold beliefs that aren’t held by the majority of people, and which, by design, are ambitious and varied enough to be expected to be proven wrong many times over time.
If being ambitious about having new or unusual ideas carries with it accepted risks of being wrong more often than usual, then perhaps a certain level of craziness has to be tolerated as well.