So we have lots of guides on how to be rational… but do we have any materials that consider what makes a person decide to pursue rationality and consciously adopt it as an approach to life?
Recently I was talking to someone and realised they didn’t accept that a rational approach was always the best one, and it was harder than I expected to come up with an argument that would be compelling for someone who didn’t think rationality was all that worthwhile… not necessarily irrational, just not a conscious follower/advocate of it. I think a lot of the arguments for it are actually quite philosophical, or in some cases mathematical. Got me thinking: what actually turns someone into a rationality fan? A rational argument? Oh wait....
I’ve got some ideas, but nothing I’d consider worth writing down at this stage… is there anything to prevent wheel reinvention?
People who look for ways to become more rational are probably far more rational than average already.
I don’t find this obvious. Why do you think this?
It makes me feel good.
I would disagree and say that people who look for ways to “become rational” in the LessWrong sense are just exposed to a class of internet-based advice systems (like lifehacker and similar) that promote the idea that you can “hack” things to make them better. Rationality is the ultimate lifehack; it’s One Weird Trick to Avoid Scope Insensitivity.
Outside of this subculture, people look for ways to improve all the time; people even look for ways to improve globally all the time. The way they do this isn’t always “rational,” or even effective, but if rationality is winning, it’s clear that people look for ways to win all the time. They might do this by improving their communication skills, or their listening skills, or trying to become “centered” or “balanced” in some way that will propagate out to everything they do.
Agreed. So basically, what made them look?
Since they were more rational already, they could observe that the rational approach had better outcomes. Irrational people presumably can’t do that. You’d have to appeal to their irrationality to make a case for rationality, and I’m not sure how that would work out.
I usually don’t use the term “rational”/”rationality” that much, and would rather talk about things like “being effective at what you care about”.
I expect this is mainly a disagreement about definitions? Many people think of “rationality” as referring to system-2 type thinking specifically, which isn’t universally applicable and wouldn’t actually be the best approach in many situations. Whereas the LessWrong definition is that Rationality is Systematized Winning, which may call for intuition and system-1 at times, depending on what the best approach actually is. With that definition given, I don’t think selling “rationality” to people is something that needs to be done—they might then start dismissing the particular rationality technique you’re trying to get them to use as “irrational”, but presumably you’re ready for that argument.
So you mean the person I was talking to had a different definition of rationality? I wonder whether most people feel the definition is quite subjective? That would actually be quite troubling when you think about it.
I actually intensely dislike that way of expressing it, mainly because argumentative competitiveness is a massive enemy of rationality. For me rationality probably comes down to instrumental truthiness :-)
“subjective” comes with a bunch of connotations that aren’t applicable.
If you look at the paper that defined evidence-based medicine, you find that it talks about deemphasizing intuition. In the 20 years since that paper was published, we have learned a lot more about intuition, and that intuition is actually quite useful. LessWrong Rationality is a 21st-century ideology that takes into account new ideas. It’s not what someone would have meant 20 years ago when they said “rationality”, because certain knowledge didn’t exist 20 years ago.
OK, but perhaps there is a core definition that determines which new aspects can be integrated.
http://slatestarcodex.com/2014/03/13/five-years-and-one-week-of-less-wrong/ should be worth reading to get up to speed on the current LW ideology.
CFAR’s vision page is also a good summary of what this community considers rationality to be about.
You will find that Scott’s article summarizing the knowledge that LW produced doesn’t even use the word “logic”. The CFAR vision uses the word once, and only near the bottom of the article.
One of the core insights of LW is that teaching people who want to be rational to be rational isn’t easy. We don’t have an easy guide to rationality that we can give people and then they become rational.
When it comes to winning over other people, most people do have goals that they care about. If you tell the body builder about the latest research on supplements or muscle building, then he’s going to be interested. Having that knowledge makes him more effective at the goals that he cares about. For him that knowledge isn’t useless nerdy stuff. As far as rationality is about winning, the body builder cares about winning in the domain of muscle building.
Of course you also have to account for status effects. Some people pretend to care about certain goals but are not willing to actually pursue those goals efficiently. There isn’t any point where someone has to self-identify as a rationalist.
Thanks that’s interesting. Scott is always a good read.
Again, I’d have to disagree that the “winning” paradigm is useful in encouraging rational thought. Irrational thought can in many instances at least appear to be a good strategy for what the average person understands as “winning”, and it additionally evokes a highly competitive psychological state that is a major source of bias.
If you consider good strategies to be irrational, then you mean something different by “rational” than what the term usually refers to on LW.
A used car salesperson convincing themselves that what they’re selling isn’t a piece of crud is an example of where irrationality is a “good” (effective) strategy. I don’t think that’s what we are trying to encourage here. That’s why I say instrumental truthiness—the truth part is important too.
I also maintain that a focus on “winning” is psychologically in conflict with truth-seeking. “Politics is the mind-killer” is the best example.
I think the orthodox LW view would be that this used car salesperson might have an immoral utility function but that he isn’t irrational.
That basically means that sometimes the person who seeks the truth doesn’t win. That outcome isn’t satisfactory to Eliezer. In Rationality is Systematized Winning he writes:
If the “irrational” agent is outcompeting you on a systematic and predictable basis, then it is time to reconsider what you think is “rational”.
Of course you can define rationality for yourself differently but it’s a mistake to project your own goals on others.
A recent article titled “Truth, it’s not that great” got 84% upvotes on LW.
I am surprised that a significant group of people think that rationality is inclusive of useful false beliefs. Wouldn’t we call LW an effectiveness forum, rather than a rationalist forum, in that case?
I think you’re reading too much into that one quite rhetorical article, but I acknowledge he prioritises “winning” quite highly. I think he ought to revise that view. Trying to win with false beliefs risks failing to achieve your goals while being oblivious to that fact. Like a mad person killing their friends because he/she thinks they’ve turned into evil dog-headed creatures or some such (exaggeration to illustrate my point).
Fair point. And maybe you’re right that I’m in the minority… I’m still not certain. I do note that upvotes do not indicate agreement, only a feeling that the article is an interesting read, etc. Also, I note many comments disagree with the article. It warrants further investigation for me, though.
Often they use “instrumental rationality” for that meaning and “epistemic rationality” for the other one. Searching this site for
epistemic instrumental
returns some relevant posts.
I think this is very important. I myself noticed that when I was younger, the longer I was unemployed, the more I started reading about socialist ideas and getting into politics. Then when I started working again it went out the window and I moved on to learning about other things.
Similarly, maybe I’m here because I just happened to be in the mood to read some fan fiction that day?
When explaining/arguing for rationality with the non-rational types, I have to resort to non-rational arguments. This makes me feel vaguely dirty, but it’s also the only way I know of to argue with people who don’t necessarily value evidence in their decision making. Unsurprisingly, many of the rationalists I know are unenthused by these discussions and frequently avoid them because they’re unpleasant. It follows that the first step is to stop avoiding arguments/discussions with people of alternate value systems, which is really just a good idea anyway.
Let’s call them “people”.
You’re right, that was uncalled for and I retract that statement.
Cultivating a group identity and a feeling of superiority to the outgroup will definitely be conducive to clear-headed analysis of tactics/strategies for winning regardless of their origins/thedish affiliations/signals, and to evaluation of whether aspects of the LW memeplex are useful for winning.
Mudblood detected!!!
:-)
Seriously though, agree agree.
I feel better about my actions when I can justify them with arguments.
But to be honest, I have never met someone who regards rationality as not worthwhile. Or maybe I have just forgotten the experience.
Well, it usually takes the form of people telling you that being highly rational is “over-analysing”, or that logic is cold and ignores important emotional considerations of various kinds, or that focusing on rationality ignores the reality that people aren’t machines, or that they don’t want to live such a cold and clinical life, etc. Basically it’s just “I don’t want to be that rational”. So I wonder, what makes people honestly think “I want to be very rational”? (grammar apologies lol)
Ah, I have met that kind of people. Usually I get the same feeling as when someone is debating politics, leading me to assume that the rejection of rationality is signaling belonging to a certain tribe, one where it is important that everyone feel good about themselves or some such.
Personally, I was raised to think and to think critically, so I can’t draw from personal experience. What convinced the ancient Greeks to embrace rationality, to start questioning the world around them? Maybe we should look there.
Yeah, it’s useless to try to rationally argue for rationality with someone who doesn’t authentically accept the legitimacy of rationality in the first place. I guess all of us are like this to some degree, but some more than others for certain.
Not a bad suggestion. I know a little about the Ancient Greek philosophers, though nothing specific springs to mind.
I believe there are people like that, but how can we tell them apart from people who appropriately take into account their emotions in their decision-making and/or can’t explain how or why they’re rational, even though they really are?
I don’t 100% follow your comment, but I find the content of those links interesting. Care to expand on that thought at all?
Sometimes we might really, actually be over-analyzing things, and what our true goals are may be better discovered by paying more attention to what System 1 is informing us of. If we don’t figure this out for ourselves, it might be other rational people who tell us about this. If someone says:
“If you’re trying to solve this problem, I believe you’re over-analyzing it. Try paying more attention to your feelings, as they might indicate what you really want to do.”
how are we supposed to tell whether they are:
someone trying to genuinely help us solve our problem(s) in a rational way
or
someone dismissing attempts at analyzing a problem at all?
It can only be one or the other. Now, someone might not have read Less Wrong, but that doesn’t preclude them from noticing when we really are over-analyzing a problem. When someone responds like this, how are we supposed to tell if they’re just strawmanning rationality, or really trying to help us achieve a more rational response?
This isn’t some rhetorical question for you. I’ve got the same concerns as you, and I’m not sure how to ask this particular question better. Is it a non-issue? Am I using confusing terms?
I like the exploration of how emotions interact with rationality that seems to be going on over there.
For me, over-analysis would be where further analysis is unlikely to yield any practical improvement in our knowledge of the options for solving the problem at hand. I’d probably treat this as quite separate from bad analysis or from the information supplied by instinct and emotion. In that sense, emotions wouldn’t come to bear on the question of over-analysis generally. However, I’d heartily agree with the proposition that emotions are a good topic of exploration and study, because they provide good option selection in certain situations and because knowledge of them might help control and account for emotionally based cognitive bias.
I guess the above would inform the question of whether the person you describe is rationally helping or just strawmanning. My sense is that in many cases the term is thrown around as a kind of defence against the mental discomfort that deep thought and the changing of ideas might bring, but perhaps I’m being too judgemental. Other times of course the person is actually identifying hand-wringing and inaction that we’re too oblivious to identify ourselves.
In terms of identifying true goals, I wonder if the contextuality and changeability of emotion would render it a relevant but ultimately unreliable source for deriving true goals. For example, in a fierce conflict it’s fairly tempting to perceive your goals as fundamentally opposed or opposite to your opponent’s, but I wonder if that’s really a good position to form.
In the end though, people’s emotions are relevant to their perception of their goals, so I suspect we do have to address emotions in the case for rationality.
Does CFAR have its own discussion forum? I can’t see one so far. Do you know what CFAR thinks about the “winning” approach held by many LWers?
CFAR has its own private mailing list, which isn’t available to individuals who haven’t attended a CFAR event before. As a CFAR alumnus, though, I can ask them your questions on your behalf. If I get a sufficient response, I can summarize their insight in a discussion post. I believe CFAR alumni are 40% active Less Wrong users, and 60% not. The base of CFAR, i.e. its staff, may have a substantially different perspective from its hundreds of workshop alumni, who compose the broader community.
I think I’d be quite interested to know what % of CFAR people believe that rationality ought to include a component of “truthiness”. Anything that could help on that?
Let’s see how basic I can go with an argument for rationality without using anything that needs rationality to explain. First the basic form:
Rationality is an effective way of figuring out what is and isn’t true. Therefore rational people end up knowing the truth more often. Knowing the truth more often helps you make plans that work. Plans that work allow you to acquire money/status/power/men/women/happiness.
Now to dress it up in some rhetoric:
My friend, have you ever wished you could be the best you? The one who knows the best way to do everything, cuts to the truth of the matter, saves the world and then gets the girl/wins the man? That’s what Rationalism looks like, but first one must study the nature of truth in order to cleave reality along its weaknesses and then bend it to your whims. You can learn the art a step stronger than science, the way that achieves the seemingly impossible. You can build yourself into that best you, a step at a time, idea upon idea until you look down the mountain you have climbed and know you have won.
There, I feel vaguely oily. Points out of 10?
I think I’m broadly supportive of your approach. The only problem I can see is that most people think it’s better to try to do stuff, as opposed to getting better at doing stuff. Rationality is a very generalised and very long-term approach and payoff. Still, I’d not reject your approach at this point.
Another issue I find interesting is that several people have commented recently on LW that (instrumental) rationality isn’t about knowing the truth but simply about achieving goals most effectively. They claim this is the focus of most LWers too. As if “truthiness” is only a tool that can even be discarded when necessary. I find that view curious.
I’m not sure they’re wrong, to be honest (assuming an average cross-section of people). Rationality is an extremely long-term approach and payoff; I am not sure it would even work for the majority of people, and if it does, I’m not sure whether it reaches diminishing returns compared to other strategies. The introductory text (the Sequences) is 9,000 pages long, and the supplementary texts (Kahneman, Ariely, etc.) take it up to 11,000. I’m considered a very fast reader and it took me 3 unemployed months of constant reading to get through. For a good period of that time I was getting a negative return; I became a worse person. It took a month after that to end up net positive.
I don’t want to harp on about unfair inherent advantages, but I just took a look at the survey results from last year and the lowest IQ was 124.6. This stuff could be totally ineffective for average people and we would have no way of knowing. Simply being told the best path for self-improvement or effective action by someone who is a rationalist, or just someone who knows what they’re doing, a normal expert in whatever field, may well be more optimal for a great many people. Essentially data-driven life coaching. I can’t test this hypothesis one way or the other without attempting to teach an average person rationalism, and I don’t know if anyone has done that, nor how I would find out if they had.
So far as instrumental rationality not being, at its core, about truth, to be honest I broadly agree with them. There may be a term in my utility function for truth, but it is not a large term, not nearly so important as the term for helping humanity or the one for interesting diversions. I seek truth not as an end in itself, but because it is so damn useful for achieving other things I care about. If I were in a world where my ignorance would save a life with no downside, while my knowledge had no long-term benefit, then I would stay ignorant. If my ignorance was a large enough net benefit to me and others, I would keep it. In the arena of CEO compensation, for example, increased transparency leads to runaway competition between CEOs to have the highest salary, shafting everyone else. Sure, the truth is known, but it has only made things worse. I’m fairly consequentialist like that.
Note that in this situation I’d still call for transparency on war crimes, torture and so on. The earlier the better. If a person knows that their actions will become known within 5 years and that it will affect them personally, that somewhat constrains them from committing an atrocity. The people making the decisions obviously need accurate data to make said decisions in all cases, but the good or damage caused by the public availability of that data is another thing entirely. Living in a world where everyone was a rationalist and the truth never caused problems would be nice, but that’s the should-world, not the is-world.
It so happens that in this world, with the brains we have, seeking the truth and not being satisfied with a lie or a distortion is an extremely effective way to gain power over the world. With our current hardware, truth-seeking may be the best way to understand enough to get things done without self-deception, but seeking the truth itself is incidental to the real goal.
Thanks for the interesting comments. I’ve not been on LW for long, and so far I’m being selective about which sequences I’m reading. I’ll see how that works out (or will I? lol).
I think my concern with the truthiness part of what you say is that it assumes we can accurately predict the consequences of adopting a belief without regard to its truth. I think that’s rarely the case. We are rarely given personal corrective evidence, because it’s the nature of a self-deception that we’re oblivious that we’ve screwed up. Applying a general rule of truthiness is a far more effective approach, imo.
Agreed, a general rule of truthiness is definitely a very effective approach and probably the most effective approach, especially once you’ve started down the path. So far as I can tell stopping halfway through is… risky in a way that never having started is not. I only recently finished the sequences myself (apart from the last half of QM). At the time of starting I thought it was essentially the age old trade off between knowledge and happy ignorance, but it appears at some point of reading the stuff I hit critical mass and now I’m starting to see how I could use knowledge to have more happiness than if I was ignorant, which I wasn’t expecting at all. Which sequences are you starting with?
By the way, I just noticed I screwed up on the survey results: I read the standard deviation as the range. IQ should be mean 138.2 with SD 13.6, implying 95% are above 111 and 99% above 103.5. It changes my first argument a little, but I think the main core is still sound.
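For anyone who wants to sanity-check those thresholds, here is a minimal sketch of the arithmetic, assuming the self-reported IQs are roughly normally distributed with the quoted mean and SD (the scipy call is just one way to do the calculation, not anything used in the thread):

```python
from scipy.stats import norm

# Commenter's figures from the survey: mean 138.2, SD 13.6 (normality assumed for illustration).
mean, sd = 138.2, 13.6

for threshold in (111.0, 103.5):
    # Survival function: probability that a respondent's IQ exceeds the threshold.
    share_above = norm.sf(threshold, loc=mean, scale=sd)
    print(f"P(IQ > {threshold}) ≈ {share_above:.1%}")

# Prints roughly 97.7% above 111 and 99.5% above 103.5, so the quoted
# "95% above 111 and 99% above 103.5" reads as conservative lower bounds.
```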
Well, I’ve done Map & Territory and have skimmed through random selections of other things. Pretty early days, I know! So far I’ve not run into anything particularly objectionable for me, or conflicting with any of the decent philosophy I’ve read. My main concern is this “truth as incidental” thing. I just posted on the topic: http://lesswrong.com/lw/l6z/the_truth_and_instrumental_rationality/
Ah, I think you may have gotten the wrong idea when I said truth was incidental: that a thing is incidental does not stop it from being useful and a good idea; it is just not a goal in and of itself. Fortunately, no one here is actually suggesting active self-deception as a viable strategy. I would suggest reading Terminal Values and Instrumental Values. Truth-seeking is an instrumental value, in that it is useful for reaching the terminal values of whatever your actual goals are. So far as I can tell, we actually agree on the subject for all relevant purposes.
You may also want to read the tragedy of group selectionism.
Thanks for the group selection link. Unfortunately I’d have to say, to the best of my non-expert judgement, that current trends in the field disagree somewhat with Eliezer in this regard. The 60s group selection work was definitely overstated and problematic, but quite a few biologists feel that this resulted in the idea being ruled out entirely, in a kind of overreaction to the original mistakes. Even Dawkins, who has traditionally dismissed group selection, acknowledged it may play more of a role than he previously thought. So it’s been refined and is making a bit of a comeback, despite opposition. Of course, only a few point to it as the central explanation for altruism, but the result of my own investigation makes me think that the biological component of altruism is best explained by a mixed model of group selection, kin selection and reciprocation. We additionally haven’t really got a reliable map of the nature/nurture balance of altruism either, so I suspect the field will “evolve” further.
I’ve read the values argument. I acknowledge that no one is claiming the truth is BAD exactly, but my suggestion here is that unless we deliberately and explicitly weigh it into our thought process, even when it has no apparent utility, we run into unforeseeable errors that compound upon each other without our awareness of them doing so. Crudely put, lazy approaches to the truth come unstuck, but we never realise it. I take it my post has failed to communicate that aspect of the argument clearly? :-(
Oh, I should add that I agree we agree in most regards on the topic.
Really? I was not aware of that trend in the field, maybe I should look into it.
Well, at least I understand you now.
Do we? I don’t think that’s the case. We know that being rational is quite hard and we don’t have a good guide to circumvent most cognitive biases.
You don’t have to go very far for that viewpoint. Robin Hanson voiced it recently.
To me that label rings alarm bells rather than evoking any positive associations. Being a fan is something quite different from actually being rational.
Well, that’s semantics in a pretty casual post. Still, the link is interesting, thanks. I wonder if anyone has offered a counter-argument along the lines of “rationality is a muscle, not a scarce resource”. But what do you do with someone who doesn’t even think that, but just thinks logic is something for nerds?
No, it’s substantial criticism. “Rationality fan” brings up in me the image of a person who aspires to be a Vulcan and who cares about labels instead of caring about outcomes.
The person who deconverted from theism and now makes atheism his new religion without really adopting good thinking habits.
Even the body builder who doesn’t consider logic to be very important and a subject for nerds might be interested in information from scientific studies about the effects of the supplements he takes.
Ok feel free to mentally replace my language with more sensible language. This was just a quick post in the open thread. Thanks for your substantial if somewhat contrarian comment.
There are also lots of guides on how to be fit. Can we find out and learn from what makes a person decide to pursue fitness?