It took me the whole day to figure even that out, really. Stress from other sources was definitely a factor, but what I observed was that whenever I thought about that idea, I got very angry, and got sudden urges to throw heavy things. When I didn’t think about it, I was less angry. I concluded later that I was angry at the idea. I wasn’t sure why (I’m still not completely sure: why would I get angry at an idea, even if it was something that was truly impossible to argue against? a completely irrefutable idea is a very special one; I guess it was the fact that the implications of it being right weren’t present in reality), but it seemed that the idea was making me angry, so I used the general strategy of feeling the idea out for any weak points, and seeing whether I could substitute something more logical for its inferences, and something more likely for its assumptions. Which is how I arrived at my conclusions.
Thanks for the explanation. I still think it is more likely that you got angry at, for example, your friend’s dismissive attitude, and thinking about the idea reminded you of it.
why would I get angry at an idea
You are a human, and humans get angry for a lot of reasons, e.g. when other humans challenge their core beliefs.
even if it was something that was truly impossible to argue against?
1) I don’t think your friend’s point of view is impossible to argue against (as I mentioned in my other comment you can argue based on results), 2) it’s not obvious to me that you’ve correctly understood your friend’s point of view, 3) I still think you are focusing too much on the semantic content of the conversation.
I don’t think your friend’s point of view is impossible to argue against (as I mentioned in my other comment you can argue based on results)
I’m talking hypothetically. I did allow myself to consider the possibility that the idea was not perfect. Actually, I assumed that until I could prove otherwise. It just seemed pretty hopeless, so I’m considering the extreme.
it’s not obvious to me that you’ve correctly understood your friend’s point of view
Maybe not. I’m not angry at my friend at all, nor was I before. I felt sort of betrayed, but my friend had reasons for thinking what they think. If I think those things or reasons are wrong, I can tell my friend, and maybe they’ll respond; and if they don’t, then it’s good enough for me that I have a reasonable interpretation of their argument, unless it’s going to hurt them to hold what I believe to be a wrong belief. Then there’s a problem. But I haven’t encountered that yet. But the point is that, to me, it is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally, than to try to argue against the idea that everybody who had tried to apply rationality to society had it wrong, which is a very long battle that would need to be fought using history books and citations.
I still think you are focusing too much on the semantic content of the conversation.
Then what else should I focus on?
You are a human, and humans get angry for a lot of reasons, e.g. when other humans challenge their core beliefs.
I like having my beliefs challenged, though. That’s what makes me a rationalist in the first place.
Though, I have thought of an alternate hypothesis for why I was offended. My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society. And maybe that offended me. Just because I was like them in that I was trying to apply rationality to society (which I had rational reasons for doing), I was as bad as a white supremacist. Again, I can’t be mad at my friend, since that’s just a belief they hold, and beliefs can change, or be justified. My friend had reasons for holding that belief, and it hadn’t caused any harm to anybody. But maybe that was what was so offensive? That sounds at least equally likely.
But the point is that, to me, it is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally
This is what I mean when I say I don’t think you’ve correctly understood your friend’s point of view. Here is a steelmanned version of what I imagine your friend’s point of view to be, one that has nothing to do with challenging rationality:
“Different domain experts use different kinds of frameworks for understanding their domains. Taking the outside view, someone who claims that a framework used in domain X is more appropriate for use in domain Y than what Y-experts themselves use is probably wrong, especially if X and Y are very different, and it would take a substantial amount of evidence to convince me otherwise. In the particular case that X = mathematics and Y = social justice, it seems like applying the methods of X to Y risks drastically oversimplifying the phenomena in Y.”
My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society.
You and your friend probably do not mean the same thing by “rationality.” It seems plausible to me that your friend pattern-matched what it sounded like you were trying to do to scientific racism. Your friend may also have been thinking of the stupid things that Spock does and trying to say “don’t be an idiot like Spock.”
You and your friend probably do not mean the same thing by “rationality.”
Why? They argued about whether it makes sense to base your moral philosophy on axioms and then logically deduce conclusions. There are plenty of people out there who disagree with that way of doing things.
When you say the word “rationality” to most people, they are going to round it to the nearest common cliché, which is Spock-style thinking where you pretend that nobody has emotions and so forth. There’s a nontrivial inferential gap that needs to be closed before you, as an LWer, can be sure that a person understands what LW means by “rationality.”
There’s a nontrivial inferential gap that needs to be closed before you, as an LWer, can be sure that a person understands what LW means by “rationality.”
I think you are making a mistake when you assume that the position mszegedy argues for is just LW-style rationality. mszegedy argued with his friend about using axiom-based reasoning, where you start with axioms and then logically deduce your conclusions.
I think the word “rationality” was also relevant to the argument. From one of mszegedy’s comments:
My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society.
I think the word “rationality” was also relevant to the argument.
You are making a mistake when you assume “rationality” here means LW-style rationality. That’s not what they argued about.
When mszegedy’s friend accused him of applying rationality to society, he was referring to mszegedy’s argument that one should base social justice on axioms.
According to him, the problem with the white supremacists isn’t that they chose the wrong axioms but that they focused on axioms in the first place. They were rationalists of the Enlightenment who had absolute confidence in their belief that certain things are right by axiom and others are wrong.
LW-style rationality allows the conclusion: “Rationality is about winning. Groups that based their moral philosophy on strong axioms didn’t win. It’s not rational to base your moral philosophy on strong axioms.”
Mszegedy’s friend got him into a situation where he had no rational argument for why he shouldn’t draw that conclusion, yet he is emotionally repulsed by it.
Mszegedy is emotionally attached to an Enlightenment ideal of rationality where you care about deducing your conclusions from proper axioms in an internally consistent way instead of just caring about winning.
Oh, okay. That makes sense. So then what’s the rational thing to conclude at this point? I’m not going to go back and argue with my friend—they’ve had enough of it. But what can I take away from this, then?
(I was using the French term philosophe, not omitting a letter, though. That’s how my history book used to write it, anyway.)
I’ve mentioned various possible takeaways in my other comments. A specific thing you could do differently in the future is to practice releasing againstness during arguments.
Humans are emotional creatures. We don’t feel emotions for rational reasons.
The emotion you felt is called cognitive dissonance. It’s something that humans feel when one of their fundamental beliefs is threatened but they don’t have good arguments to back it up.
I think it’s quite valuable to have a strong reference experience of what cognitive dissonance feels like. It makes it easier to recognize the feeling when you feel it in the future. Whenever you are feeling that feeling, take note of the beliefs in question and examine them more deeply in writing when you are at home.
I was recently reflecting on an argument I had with someone where they expressed an idea that made me very frustrated, though I don’t think I was as angry as you described yourself after your own argument. I judged them to be making a very basic mistake of rationality, and I was trying to help them avoid it. Their response implied that they didn’t think they had executed a flawed mental process like the one I had accused them of, and that even if they had, it would not necessarily be a mistake. In the moment, I took this response to be a complete rejection of rationality (or something like that), and I became slightly angry and very frustrated.
I realized afterwards that a big part of what upset me was that I was trying to do something that I felt would be helpful to this person and everyone around them and possibly the world at large, yet they were rejecting it for no reason that I could identify in the moment. (I know that my pushiness about rationality can make the world at large worse instead of better, but this was not on my mind in the moment.) I was thinking of myself as being charitable and nice, and I was thinking of them as inexplicably not receptive. On top of this, I had failed to liaise even decently on behalf of rationalists, and I had possibly turned this person off to the study of rationality. I think these things upset me more than I ever could have realized while the argument was still going on. Perhaps you felt some of this as well? I don’t expect these considerations to account for all of the emotions you felt, but I would be surprised if they were totally uninvolved.
Yes, that sounds plausible.