This comment will collect things that I think beginner rationalists, “naive” rationalists, or “old school” rationalists (these distinctions are in my head, I don’t expect them to translate) do which don’t help them.
You have an exciting idea about how people could do things differently. Or maybe you think of norms which if they became mainstream would drastically increase epistemic sanity. “If people weren’t so sensitive and attached to their identities then they could receive feedback and handle disagreements, allowing us to more rapidly work towards the truth.” (example picked because versions of this stance have been discussed on LW)
Sometimes the rationalist is thinking “I’ve got no idea how becoming more or less sensitive, gaining a thicker or thinner skin, or shedding or gaining identity works in humans. So I’m just going to black box this, tell people they should change, negatively reinforce them when they don’t, and hope for the best.” (ps I don’t think everyone thinks this, though I know at least one person who does) (most relevant parts in italics)
Comments will be continued thoughts on this behavior.
When I see this behavior, I worry that the rationalist is setting themselves up to have a blindspot when it comes to themselves being “overly sensitive” to feedback. I worry about this because it’s happened to me. Not with reactions to feedback but with other things. It’s partially the failure mode of thinking that some state is beneath you, being upset and annoyed at others for being in that state, and this disdain making it hard to see when you engage in it.
K, I get that thinking a mistake is trivial doesn’t automatically mean you’re doomed to secretly make it forever. Still, I worry.
The way this can feel to the person being told to change: “None of us care about how hard this is for you, nor the pain you might be feeling right now. Just change already, yeesh.” (it can be true or false that the rationalist actually thinks this. I think I’ve seen some people playing the rationalist role in this story who explicitly endorsed communicating this sentiment)
Now, I understand that making someone feel emotionally supported takes various levels of effort. Sometimes it might seem like the effort required is not worth the loss in pursuing the original rationality target. We could have lots of fruitful discussion about what norms would be good for drawing that line. But I think another problematic thing that can happen is that, in the rationalist’s rush to get back on track to pursuing the important target, they intentionally or unintentionally communicate: “You aren’t really in pain. Or if you are, you shouldn’t be in pain / you suck or are weak for feeling pain right now.” Being told you aren’t in pain SUCCCKS, especially when you’re in pain. Being reprimanded for being in pain SUCCCKS, especially when you’re in pain.
Claim: Even if you’ve reached a point where it would be too costly to give the other person adequate emotional support, the least you can do is not make them think they’re being gaslit about their pain or reprimanded for it.
Errata.
or [un]intentionally communicate:
[a] “You aren’t really in pain. [b] Or if you are, you shouldn’t be in pain / you suck or are weak for feeling pain right now.” [a] Being told you aren’t in pain SUCCCKS, especially when you’re in pain.
Claim: Even if you’ve reached a point it would be to costly to give the other person adequate emotional support, the least you can do is not make them think they’re being [a’] gaslit about their pain.
The dialogue refers to two possibilities, A and B, but only A is referenced afterwards. (I wonder what the word for ‘telling people their pain doesn’t matter’ is.)
Yeah, I only talked about A after. Is the parenthetical rhetorical? If not I’m missing the thing you want to say.
Non-rhetorical. The spelling suggestion suggests an improvement which is largely unambiguous/style-agnostic. Suggesting adding a word requires choosing a word—a matter which is ambiguous/style-dependent. Sometimes writing contains grammatical errors—but when people other than the author suggest fixes, the fixes don’t have the same voice. This is why I included a prompt for what word you (Hazard) would use.
For clarity, I can make less vague comments in the future. What I wanted to say rephrased:
they intentionally or [un]intentionally communicate:
“You aren’t really in pain. Or if you are, you shouldn’t be in pain / you suck or are weak for feeling pain right now.” Being told you aren’t in pain SUCCCKS, especially when you’re in pain.
Claim: Even if you’ve reached a point it would be to costly to give the other person adequate emotional support, the least you can do is not make them think they’re being gaslit about[/mocked for] their pain.
Here the [] serve one purpose—suggesting improvement, even when there’s multiple choices.
Aaaah, I see now. Just edited to what I think fits.
If you really had no idea… fine, can’t do much better than operant conditioning a person towards the end goal. In my world, getting a deep understanding of how to change is the biggest goal/point of rationality (I’ve given myself away, I care about AI Alignment less than you do ;).
So trying to skip to the rousing debate and clash of ideas while just hoping everyone figures out how to handle it feels like leaving most of the work undone.
Meta note: Me upvoting the comment above could make things go out of order.
operant conditioning
It could also be seen as selection—get rid of the people who aren’t X. This risks getting rid of people who might learn, which could be an issue if the goal of that place (whether it’s LW, SSC, etc.) includes learning.
An organization consisting only of people who have a PhD might be an interesting place, perhaps enabling collaboration and cutting-edge work that couldn’t be done anywhere else. But without a place where people can get a PhD, eventually there will be no such organizations.
(Meta: the order wasn’t important, thanks for thinking about that though)
The selection part is something else I was thinking about. One of my thoughts was your “If there’s no way to train PhDs, they die out.” And the other was me being a bit skeptical of how big the pool would be right this second if we adopted a really thick-skin policy. Reflecting on that second point, I realize I’m drawing from my day-to-day distribution, and don’t have thoughts about how thick-skinned most LW people are or aren’t.
A thought that is related to this general pattern, but not this example. Think of having an idea of an end skill that you’re excited by (doing Bayes updates IRL, successfully implementing TAPs, being swayed by “solid logical arguments”). Also imagine not having a theory of change. I personally have sometimes not noticed that there is or could be an actual theory of how to move from A to B (often because I thought I should already be able to do that), and so would use the black box negative reinforcement strategy on myself.
Being in that place involved being stuck for a while and feeling bad about being stuck. Progress was only made when I managed to go “Oh. There are steps to get from A to B. I can’t expect to already know them. I must focus on understanding this progression, and not on just punishing myself whenever I fail.”
I’ve been thinking about this as a general pattern, and have specifically filled in “you should be thick skinned” to make it concrete. Here’s a thought that applies to this concrete example that doesn’t necessarily apply to the general pattern.
There’s all sorts of reasons why someone might feel hurt, put-off, or upset about how someone gives them feedback or disagrees with them. One of these reasons can be something like, “From past experience I’ve learned that someone who uses XYZ language or ABC tone of voice is saying what they said to try and be mean to me, and they will probably try to hurt and bully me in the future.”
If you are the rationalist in this situation, you’re annoyed that someone thinks you’re a bully. You aren’t a bully! And it sure would suck if they convinced other people that you were a bully. So you tell them that, duh, you aren’t trying to be mean, that this is just how you talk, and that they should trust you.
If you’re the person being told to change, you start to get even more worried (after all, this is exactly what your piece of shit older brother would do to you): this person is telling you to trust that they aren’t a bully when you have no reason to, and you’re worried they’re going to turn the bystanders against you.
Hmmmm, after writing this out the problem seems much harder to deal with than I first thought.