To be fair, Less Wrong’s definition of rationality is specifically designed so that no reasonable person could ever disagree that more rationality is always good, thereby making the definition almost meaningless. And then all the connotations of the word still slip in, of course. It’s a cheap tactic, also used in the social justice movement, which Yvain recently criticized on his blog (motte and bailey, I think it was called).
To clarify what I mean, take the following imaginary conversation:
Less Wronger: Hey! You seem smart. You should consider joining the Less Wrong community and learn to become more rational like us!
Normal: (using definition: Rationality means using cold logic and abstract reasoning to solve problems) I don’t know, rationality seems overrated to me. I mean, all the people I know who are best at using cold logic and abstract reasoning to solve problems tend to be nerdy guys who never accomplish much in life.
Less Wronger: Actually, we’ve defined rationality to mean “winning”, or “winning on purpose”, so more rationality is always good. You don’t want to be like those crazy normals who lose on purpose, do you?
Normal: No, of course I want to succeed at the things I do.
Less Wronger: Great! Then since you agree that more rationality is always good you should join our community of nerdy guys who obsessively use cold logic and abstract reasoning in an attempt to solve their problems.
As usual with the motte and bailey, only the desired definition is used explicitly. However, the connotations of the second, mundane use of the word slip in.
In my experience, the problem is not with disagreeing, but rather that most people won’t even consider the LW definition of rationality. They will use the nearest cliche instead, explain why the cliche is problematic, and that’s the end of rationality discourse.
So, for me the main message of LW is this: A better definition of rationality is possible.
It’s not a different definition of rationality. It’s a different word for winning.
If they’re not willing to use “rationality” that way, then just abandon the word.
We don’t just use ‘winning’ because, well, ‘winning’ can easily work out to ‘losing’ in real-world terms. Think of a person who alienates everyone they meet through their extreme competitiveness: they are so focused on winning that they sacrifice good relations with people. But this is both a) not what is meant by ‘rationalists win’ and b) a highly accessible definition of winning, the naive “Competition X exists. Agent A wins, Agent B loses.” That reading is VASTLY more accessible than ‘achieving what actually improves your life, as opposed to what you merely want or are under pressure to achieve’.
I’d like to use the word ‘winning’, but I think it conveys even less of the intended meaning than ‘rationality’ to the average person.
Yvain criticized switching definitions depending on whether you want to defend an easily defensible position, or have others accept an untenable position.
With Less Wrong’s definition of rationality (epistemic rationality: the ability to arrive at true beliefs; instrumental rationality: the ability to know how to achieve your goals), how is that happening?
So what’s the bailey here? You make it seem like having obviously true premises is a bad thing.
Note: a progressive series of less firmly held claims is NOT Motte and Bailey if you aren’t vacillating on what each means.
It’s a problem if anyone ends up sneaking in connotations.
Yes, that’s what an example would look like. Can anyone provide any?
To paraphrase someone else’s example, the motte is that science/reason helps people be right, and the bailey is that the LW memeplex is all correct and the best use of one’s time (the memeplex including maximum support of abstract research about “friendly” AI, frequent attendance at LW self-help events, cryonics, and evangelizing Rationalism).
Here’s the problem with your attempt to apply Motte and Bailey to that:
If challenged on those other things, we do not reply that ‘rationalism is just science/reason helping people be right; how could you possibly oppose it?’ Well, except for the last item, where that reply really does seem to address the problem.
So, it’s just a perfectly ordinary (and acceptable) sequence of progressively more controversial claims, and not a Motte-and-Bailey system.
Different members act as different parts of the motte and bailey: some argue for extreme things; others say those extreme things are not “real” Rationalism.
That structure makes it not motte and bailey—the motte must be friendly to the bailey, not hostile to it!
What do you mean exactly by “specifically designed”?
Anyway, I don’t disagree with you exactly.
My original point was not that the LW definition of rationality was a good or bad definition, but that the definition Algernoq was asserting as the LW consensus definition of rationality was probably not actually true.
ETA: I’m also not sure that I agree with you about the definition being useless, as the LW definition seems designed specifically to counter the kind of thinking that leads to someone spending 25% of their time for a car trip planning to save 5%. By explicitly stating that rationality is about winning, it helps to not get bogged down in the details and to remember what the point is. Whether or not the definition that has arisen was explicitly designed with that in mind, I can’t say.
I don’t understand this. Are you saying that people spend 25% of their time planning the trip, and save 5% of their time on the trip? (Which is bad, but I doubt it’s that common.) Or that they spend 25% of their time on the trip, and plan to save 5% of their time on something else? (Which I also doubt is that common.) Or that they spend 25% of their time on the trip, and plan to save 5% of something else, like money? (Which may or may not be bad, depending on how time translates to money.)
This does sound a little like the complaint that people spend 25% of the price of something (rather than of the time) on a car trip to save 5% on the price, but I’ve argued that that’s a form of precommitting: as long as you precommit to buy at the store with the lowest price, even if it’s far away, nearby stores have an incentive to keep prices low.
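For concreteness, here is a minimal sketch of the one-shot arithmetic behind that complaint (the $100 price is a hypothetical number, not from the thread). Considered in isolation, the trip is a straightforward loss; the precommitment argument is precisely that the real payoff also includes the downward pressure on nearby stores' future prices, which this calculation leaves out.

```python
# Hypothetical numbers illustrating "spend 25% of the price on the trip
# to save 5% on the price". Only the one-shot payoff is computed here;
# the precommitment argument says the real benefit is the incentive
# effect on nearby stores, which this deliberately ignores.
price = 100.00               # nearby store's price (hypothetical)
trip_cost = 0.25 * price     # cost of driving to the distant store
savings = 0.05 * price       # discount at the distant store

net = savings - trip_cost
print(f"one-shot payoff of the trip: {net:+.2f}")  # prints -20.00, a loss
```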
But if you take into account both price and location when deciding where to shop, stores will have an incentive not only to keep prices low but also to be near where people are!
Stores can’t move closer to where all the people are, however; at some point any incentive from moving closer to some people would be countered by moving away from other people. There’s also the problem that past a certain density, stores do better when farther away from other stores. Not to mention the transaction costs of moving in the first place. Prices don’t have these problems.
All I’m saying is that it looks like many people are being Rational because it’s fun, not because it’s useful.
I’m not particularly saying anything as I was just referring to the concept introduced in the main post. You’ll have to ask Algernoq as to what the specific intention was.