But the point is that, to me, it is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally.
This is what I mean when I say I don’t think you’ve correctly understood your friend’s point of view. Here is a steelmanning of what I imagine your friend’s point of view to be that has nothing to do with challenging rationality:
“Different domain experts use different kinds of frameworks for understanding their domains. Taking the outside view, someone who claims that a framework used in domain X is more appropriate for use in domain Y than what Y-experts themselves use is probably wrong, especially if X and Y are very different, and it would take a substantial amount of evidence to convince me otherwise. In the particular case that X = mathematics and Y = social justice, it seems like applying the methods of X to Y risks drastically oversimplifying the phenomena in Y.”
My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society.
You and your friend probably do not mean the same thing by “rationality.” It seems plausible to me that your friend pattern-matched what it sounded like you were trying to do to scientific racism. Your friend may also have been thinking of the stupid things that Spock does and trying to say “don’t be an idiot like Spock.”
You and your friend probably do not mean the same thing by “rationality.”
Why? They argued about whether it makes sense to base your moral philosophy on axioms and then logically deduce conclusions. There are plenty of people out there who disagree with that way of doing things.
When you say the word “rationality” to most people, they are going to round it to the nearest common cliché, which is Spock-style thinking where you pretend that nobody has emotions and so forth. There’s a nontrivial inferential gap that needs to be closed before you, as an LWer, can be sure that a person understands what LW means by “rationality.”
There’s a nontrivial inferential gap that needs to be closed before you, as an LWer, can be sure that a person understands what LW means by “rationality.”
I think you are making a mistake when you assume that the position mszegedy argues for is just LW-style rationality. mszegedy argued with his friend about using axiom-based reasoning, where you start with axioms and then logically deduce your conclusions.
I think the word rationality was also relevant to the argument. From one of mszegedy’s comments:
My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society.
I think the word rationality was also relevant to the argument.
You make a mistake when you assume rationality to mean LW-style rationality. That’s not what they argued about.
When mszegedy’s friend accused him of applying rationality to society, he referred to mszegedy’s argument that one should base social justice on axioms.
According to him, the problem with the white supremacists isn’t that they chose the wrong axioms but that they focused on axioms in the first place. They were Enlightenment rationalists who had absolute confidence in their belief that certain things are right by axiom and others are wrong.
LW-style rationality allows the conclusion: “Rationality is about winning. Groups that based their moral philosophy on strong axioms didn’t win. It’s not rational to base your moral philosophy on strong axioms.”
Mszegedy’s friend got him into a situation where he had no rational argument for why he shouldn’t draw that conclusion. He is emotionally repulsed by that conclusion.
Mszegedy is emotionally attached to an Enlightenment ideal of rationality where you care about deducing your conclusions from proper axioms in an internally consistent way, instead of just caring about winning.
Oh, okay. That makes sense. So then what’s the rational thing to conclude at this point? I’m not going to go back and argue with my friend—they’ve had enough of it. But what can I take away from this, then?
(I was using the French term philosophe, not omitting a letter, though. That’s how my history book used to write it, anyway.)
I’ve mentioned various possible takeaways in my other comments. A specific thing you could do differently in the future is to practice releasing againstness during arguments.
Yes, that sounds plausible.