At first, I felt that ‘nurture’ was a terrible name, because the primary thing I associated with the idea you’re discussing is that we are building up an axiomatised system together. Collaboratively. I’ll say a thing, and you’ll add to it. Lots of ‘yes-and’. If you disagree, then we’ll step back a bit, and continue building where we can both see the truth. If I disagree, I won’t attack your idea, but I’ll simply notice I’m confused about a piece of the structure we’re building, and ask you to add something else instead, or wonder why you’d want to build it that way. I agree this is more nurturing, but that’s not the point. The point is collaboration.
But then my model of Said said “What? I don’t understand why this sort of collaborative exploration isn’t perfectly compatible with combative culture—I can still ask all those questions and make those suggestions” which is a point he has articulated quite clearly down-thread (and elsewhere). So then I got to thinking about the nurturing aspect some more.
I’d characterise combative culture as working best in a professional setting, where it’s what one does as one’s job. When I think of productive combative environments, I visualise groups of experts in healthy fields like math or hard science or computer science. The researchers will bring forth powerful and interesting arguments to each other, but typically they neither discuss nor require an explicit model of how another researcher in their field thinks. And symmetrically, how each researcher thinks is their own responsibility—that’s their whole job! They’ll note they were wrong, and make some updates about what cognitive heuristics they should be using, but they won’t bring that up in the conversation, because that’s not the point of the conversation. The point of the conversation is, y’know, whether the theorem is true, or whether this animal evolved from that one, or whether this architecture is more efficient when scaled. Not our emotions or feelings.
Sure, we’ll attack each other in ways that can often make people feel defensive, but in a field where everyone has shown their competence (e.g. PhDs) we have common knowledge of respect for one another—we don’t expect it to actually hurt us to be totally wrong on this issue. It won’t mean I lose social standing, or stop being invited to conferences, or get fired. Obviously those outcomes have to correlate with being right, but no single sentence or single disagreement ever decides something that consequential. Generally the worst that will happen to you is that you end up a median scientist/researcher, and don’t get to give the big conference talks. There’s a basic level of trust as we go about our work, which means combative culture is not a real problem.
I think this is good. It’s hard to admit you’re wrong, but if we have common knowledge of respect, then this makes the fear smaller, and I can overcome it.
I think one of the key motivations for nurturing culture is that we don’t have common knowledge that everything will be okay in many parts of our lives, and in the most important decisions in our lives way more is at stake than in academia. Some example decisions where being wrong has far worse consequences for your life than being wrong about whether Fermat’s Last Theorem is true or false:
Will my husband/wife and I want the same things in the next 50 years?
Will my best friends help me keep up the standard of personal virtue I care about in myself, or will they not notice if I (say) lie to myself more and more?
I’m halfway through med school. Is being a doctor actually hitting the heavy tails of impact I could have with my life?
These questions have much more at stake. I know for myself, when addressing them, I feel emotions like fear, anger, and disgust.
Changing my mind on the important decisions in my life, especially those that affect my social standing amongst my friends and community, is far harder than changing my mind about an abstract topic whose results don’t have much direct impact on my life.
Not that computer science or chemistry or math aren’t incredibly hard, it’s just that doing good work in these fields does not require the particular skill of believing things even when they’ll lower your social standing.
I think if you imagine the scientists above applying combative culture to their normal lives (e.g. whether they feel aligned with their husband/wife for the next 50 years), and really trying hard at it, they’d immediately go through an incredible amount of emotional pain, until it was too much to bear and they stopped.
If you want someone to be open to radically changing their job, lifestyle, close relationships, etc, some useful things can be:
Have regular conversations with norms such that the person will not be immediately judged if they say something mistaken, or if they consider a hypothesis that you believe to be wrong.
If you’re discussing with them an especially significant belief and whether to change it, keep track of their emotional state, and help them carefully walk through emotionally difficult steps of reasoning.
If you don’t, they’ll put a lot of effort into finding any other way of shooting themselves in the foot that’s available, rather than realise that something incredibly painful is about to happen to them (and has been happening for many years).
I think that trying to follow this goal to its natural conclusions will lead you to a lot of the conversational norms that we’re calling ‘nurturing’.
I think Qiaochu once said something like “If you don’t regularly feel like your soul is being torn apart, you’re not doing rationality right.” Those weren’t his specific words, but I remember the idea being something like that.
I think one of the key motivations for nurturing culture is that we don’t have common knowledge that everything will be okay in many parts of our lives, and in the most important decisions in our lives way more is at stake than in academia. Some example decisions where being wrong has far worse consequences for your life than being wrong about whether Fermat’s Last Theorem is true or false:
I do not really agree with your view here, but I think what you say points to something quite important.
I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles. (Possibly this is because giving personal loyalty pre-eminence in one’s value system is difficult, at best, to reconcile with a utilitarian moral framework. This is one of the many reasons I am not a utilitarian.)
One of the benefits of mutual personal loyalty between two people is that they can each expect not to be abandoned, even if the other judges them to be wrong. This is patriotism in microcosm: “my country, right or wrong” scaled down to the relation between individuals—“my friend, right or wrong”. So you say to me: “You are wrong! What you say is false; and what you do is a poor choice, and not altogether an ethical one.” And yet I know that we remain friends; and you will stand by me, and support me, and take risks for me, and make sacrifices for me, if such are called for.
There are limits, of course; thresholds which, if crossed, strain personal loyalty to its limit, and break it. Betrayal of trust is one such. Intentional, malicious action of one’s friend against oneself is another; so is failure to come to one’s aid, in a dark hour. But these are high thresholds. It is near-impossible to exceed them accidentally. (And if you think you know exactly what I’m talking about, then ask yourself: if your friend committed murder, would you turn them in to the police? If the answer is “yes, of course”, then some inferential distance yet remains…)
To a friend like this, you can say, without softening the blow: “Wrong! You’re utterly wrong! This is foolish!”—and without worrying that they will not confide in you, for fear of such a judgment. And from a friend like this, you can hear a judgment like that, and yet remain certain that your friendship is not under the least threat; and so, in a certain important sense, it does not hurt to be judged… no more, at least, than it hurts to judge yourself.
Friendship like this… is it “Nurture Culture”? Or “Combat Culture”?
I think Qiaochu once said something like “If you don’t regularly feel like your soul is being torn apart, you’re not doing rationality right.” Those weren’t his specific words, but I remember the idea being something like that.
The consequence of what I say above is this: it is precisely this state (“soul being torn apart”) which I think is critically important to avoid, in order to be truly rational.
Thanks for your reply. I also do not agree with it, but found that it points to some important ideas. (In the past I have tended to frame the conversation more around ‘trust’ than ‘personal loyalty’, but I think with otherwise similar effect.)
The first question I want to ask is: how do you get to the stage where personal loyalty is warranted?
From time to time, I think back to the part of Harry Potter and the Philosopher’s Stone where Harry, Hermione and Ron become loyal to one another—the point where they build the strength of relationship needed to face down Voldemort without worrying that the others may leave out of fear.
It is after Harry and Ron run in to save Hermione from a troll.
The people who I have the most loyalty to in the world are those who have proven that it is there, with quite costly signals. And this was not a stress-free situation. It involved some pressure on each of our souls, though the important thing was that we came out with our souls intact, and also built something we both thought truly valuable.
So it is not clear to me that you can get to the stage of true loyalty without facing some trolls together, and risking actually losing.
The second and more important question I want to ask is: do you think that having loyal friends is sufficient to achieve your goals without regularly feeling like your soul is being torn apart?
You say:
The consequence of what I say above is this: it is precisely this state (“soul being torn apart”) which I think is critically important to avoid, in order to be truly rational.
Suppose I am confident that I will not lose my loyal friend.
Here are some updates about the world I might still have to make:
My entire social circle gives me social gradients in directions I do not endorse, and I should leave and find a different community
There is likely to be an existential catastrophe in the next 50 years and I should entirely re-orient my life around preventing it
The institution I’m rising up in is fundamentally broken, and for me to make real progress on problems I care about I should quit (e.g. academia, a bad startup).
All the years of effort I’ve spent on a project or up-skilling in a certain domain have been either useless or actively counterproductive (e.g. working in politics, a startup that hasn’t found product-market fit), and I need to give up and start over.
The only world in which I could feel confident that I wouldn’t have to go through any of these updates is one in which the institutions are largely functional, and in which, as I rise, my local social incentives align with my long-term goals. This is not what I observe.
Given the world I observe, it seems impossible for me to not pass through events and updates that cause me significant emotional pain and significant loss of local social status, whilst also optimising for my long term goals. So I want my close allies, the people loyal to me, the people I trust, to have the conversational tools (cf. my comment above) to help me keep my basic wits of rationality about me while I’m going through these difficult updates and making these hard decisions.
I am aware this is not a hopeful comment. I do think it is true.
---
Edit: changed ‘achieve your goals while staying rational’ to ‘achieve your goals without regularly feeling like your soul is being torn apart’, which is what I meant to say.
There’s a lot I have to say in response to your comment.
I’ll start with some meta commentary:
From time to time, I think back to the part of Harry Potter and the Philosopher’s Stone where Harry, Hermione and Ron become loyal to one another—the point where they build the strength of relationship needed to face down Voldemort without worrying that the others may leave out of fear.
It is after Harry and Ron run in to save Hermione from a troll.
Harry and Ron never ran in to save Hermione from a troll, never became loyal to one another as a result, never built any strength of relationship, and never faced down Voldemort. None of these events ever happened; and Harry, Ron, and Hermione, in fact, never existed.
I know, I know: I’m being pedantic, nitpicking, of course you didn’t mean to suggest that these were actual events, you were only using them as an example, etc. I understand. But as Eliezer wrote:
What’s wrong with using movies or novels as starting points for the discussion? No one’s claiming that it’s true, after all. Where is the lie, where is the rationalist sin? …
Not every misstep in the precise dance of rationality consists of outright belief in a falsehood; there are subtler ways to go wrong.
Are the events depicted in Harry Potter and the Philosopher’s Stone—a children’s story about wizards (written by an inexperienced writer)—representative of how actual relationships work, between adults, in our actual reality, which does not contain magic, wizards, or having to face down trolls in between classes? If they are, then you should have no trouble calling to mind, and presenting, illustrative examples from real life. And if you find yourself hard-pressed to do this, well…
Let me speak more generally, and also more directly. As I have previously obliquely suggested, I think it is high time for a moratorium, on Less Wrong, on fictional examples used to illustrate claims about real people, real relationships, real interpersonal dynamics, real social situations, etc. If I had my way, this would be the rule: if you can’t say it without reference to examples from fiction, then don’t say it. (As for using Harry Potter as a source of examples—that should be considered extremely harmful, IMHO.)
That this sort of thing distorts your thinking is, I think, too obvious to belabor, and in any case Eliezer did an excellent job with the above-linked Sequence post. But another problem is that it also muddies communication, such as in the case of this line:
So it is not clear to me that you can get to the stage of true loyalty without facing some trolls together, and risking actually losing.
In the real world, there are no trolls. Clearly, you’re speaking metaphorically. But what is the literal interpretation? What are “trolls”, in this analogy? Precisely? Is it “literal life or death situations, where you risk actually, physically dying?” Surely not… but then—what? I really don’t know. (I have some thoughts on what is and what is not necessary to “get to the stage of true loyalty”, but I really have no desire to respond to a highly ambiguous claim; it seems likely to result in us wasting each other’s time and talking past one another.)
Ok, enough meta, now for some object-level commentary:
The second and more important question I want to ask is: do you think that having loyal friends is sufficient to achieve your goals without regularly feeling like your soul is being torn apart?
Having loyal friends is not sufficient to achieve your goals, period, without even tacking on any additional criteria. This seems very obvious to me, and it seems unlikely that you wouldn’t have noticed this, so I have to assume I have somehow misunderstood your question. Please clarify.
Here are some updates about the world I might still have to make:
Of the potential updates you list, it seems to me that some of them are not like the others. To wit:
My entire social circle gives me social gradients in directions I do not endorse, and I should leave and find a different community
In my case, I have great difficulty imagining what this would mean for me. I do not think it applies. I don’t know the details of your social situation, but I conjecture that the cure for this sort of possibility is to find your social belonging less in “communities” and more in personal friendships.
There is likely to be an existential catastrophe in the next 50 years and I should entirely re-orient my life around preventing it
Note that this combines a judgment of fact with… an estimate of effectiveness of a certain projected course of action, I suppose? My suggestion would be to disentangle these things. Once this is done, I don’t see why there should be any more “soul tearing apart” involved here than in any of a variety of other, much more mundane, scenarios.
The institution I’m rising up in is fundamentally broken, and for me to make real progress on problems I care about I should quit (e.g. academia, a bad startup).
Indeed, I have experience with this sort of thing. Knowing that, regardless of the outcome of the decision in question, I would have the unshakable support of friends and family, removed more or less all the “soul tearing apart” from the equation.
All the years of effort I’ve spent on a project or up-skilling in a certain domain have been either useless or actively counterproductive (e.g. working in politics, a startup that hasn’t found product-market fit), and I need to give up and start over.
Indeed, this can be soul-wrenching. My comment on the previous point applies, though, of course, in this case it does not go nearly as far toward full amelioration as in the previous case. But, of course, this is precisely the sort of situation one should strive to avoid (cf. the principle of least regret). Total avoidance is impossible, of course, and this sort of situation is the (hopefully) rare exception to the heuristic I noted.
Given the world I observe, it seems impossible for me to not pass through events and updates that cause me significant emotional pain and significant loss of local social status, whilst also optimising for my long term goals. So I want my close allies, the people loyal to me, the people I trust, to have the conversational tools (cf. my comment above) to help me keep my basic wits of rationality about me while I’m going through these difficult updates and making these hard decisions.
Meaning no offense, but: if you’re losing significant (and important) social status in any of the situations listed above, then you are, I claim, doing something wrong (specifically, organizing your social environment very sub-optimally).
And in those cases where great strain is unavoidable (such as in the last example you listed), it is precisely a cold, practical, and un-softened judgment, which I most desire and most greatly value, from my closest friends. In such cases—where the great difficulty of the situation is most likely to distort my own rationality—“nurturing” takes considerably less caring and investment, and is much, much less valuable, than true honesty, and a clear-eyed perspective on the situation.
I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles.
I’m surprised to hear that sentiment from you when you also speak against the value of rationalists doing community things together.
Doing rituals together is a way to create the emotional bonds that in turn create mutual loyalty. That’s why fraternities have their initiation rituals.
I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles.
I’m surprised to hear that sentiment from you when you also speak against the value of rationalists doing community things together.
These sentiments are not only not opposed—they are, in fact, inextricably linked. That this seems surprising to you is… unfortunate; it means the inferential distance between us is great. I am at a loss for how to bridge it, truth be told. Perhaps someone else can try.
Doing rituals together is a way to create the emotional bonds that in turn create mutual loyalty. That’s why fraternities have their initiation rituals.
You cannot hack your way to friendship and loyalty—and (I assert) bad things happen if you try. That you can (sort of) hack your way to a sense of friendship and loyalty is not the same thing (but may prevent you from seeing the fact of the preceding sentence).
What does it look like for this sort of thing to be done well? Can you point to examples?
I am unsure what you’re asking. What is “this sort of thing”? Do you mean “friendship and loyalty”? I don’t know that I have much to tell you, on that subject, that hasn’t been said by many people, more eloquent and wise than I am. (How much has been written about friendship, and about loyalty? This stuff was old hat to Aristotle…)
These are individual virtues. They are “done”—well or poorly—by individuals. I do not think there is any good way to impose them from above. (You can, perhaps, encourage a social environment where such virtues can more readily be exercised, and avoid encouraging a social environment where they’re stifled. But the question of how to do this is… complex; beyond the scope of this discussion, I think, and in any case not something I have anything approaching a solid grasp on.)
You can, perhaps, encourage a social environment where such virtues can more readily be exercised, and avoid encouraging a social environment where they’re stifled.
I thought that’s what you were talking about: that some ways of organizing people fight or delegitimize personal loyalty considerations, while others work with it or at least figure out how not to actively destroy it. It seemed to me like you were saying that the way Rationalists try to do community tends to be corrosive to this other thing you think is important.
That’s… at once both close to what I’m saying, and also not really what I’m saying at all.
I underestimated the inferential distance here, it seems; it’s surprising to me, how much what I am saying is not obvious. (If anything, I expected the reaction to be more like “ok, yes, duh, that is true and boring and everyone knows this”.)
I may try to write something longer on this topic, but I fear it would have to be much longer; the matters that this question touches upon range wide and deep…
I hope you do find the time to write about this in depth.
Seconded. Would like to hear the in-depth version.
I don’t know that I have much to tell you, on that subject, that hasn’t been said by many people, more eloquent and wise than I am
Sure, but as such, there’s a lot of different approaches to how to do them well (some mutually exclusive), so pinpointing which particular things you’re talking about seems useful.
(I do think I have an idea of what you mean, and might agree, but the thing you’re talking about is probably about as clear to me as Ben’s “fight trolls” comment was to you.)
Seems fine to table it for now if it doesn’t seem relevant though.
Do you think that existing societal organisations like fraternities aren’t built with the goal of facilitating friendship and loyalty? Do you think they fail at that and produce bad results?
It varies. That is a goal for some such organizations.
I think they produce bad results, while failing at the above goal, approximately to the degree that they rely on “emotion hacking”.
Interestingly, there’s currently a highly upvoted question on Academia.StackExchange titled “Why don’t I see publications criticising other publications?”, which suggests that academics don’t engage in combat culture within papers.
Most academic work stirs up little emotion, but that’s not true for all academic work. Ioannidis wrote about how he might be killed for his work.
Whenever work that’s revolutionary in the Kuhnian sense is done, there’s the potential for a loss of social status.