Let’s see how basic I can go with an argument for rationality without using anything that itself needs rationality to explain. First, the basic form:
Rationality is an effective way of figuring out what is and isn’t true. Therefore rational people end up knowing the truth more often. Knowing the truth more often helps you make plans that work. Plans that work allow you to acquire money/status/power/men/women/happiness.
Now to dress it up in some rhetoric:
My friend, have you ever wished you could be the best you? The one who knows the best way to do everything, cuts to the truth of the matter, saves the world and then gets the girl/wins the man? That’s what Rationalism looks like, but first you must study the nature of truth in order to cleave reality along its weaknesses and then bend it to your whims. You can learn the art a step stronger than science, the way that achieves the seemingly impossible. You can build yourself into that best you, a step at a time, idea upon idea, until you look down the mountain you have climbed and know you have won.
There, I feel vaguely oily. Points out of 10?
I think I’m broadly supportive of your approach. The only problem I can see is that most people think it’s better to try to do stuff, as opposed to getting better at doing stuff. Rationality is a very generalised approach with a very long-term payoff. Still, I’d not reject your approach at this point.
Another issue I find interesting is that several people have commented recently on LW that (instrumental) rationality isn’t about knowing the truth but simply about achieving goals most effectively. They claim this is the focus of most LWers too. As if “Truthiness” is only a tool that can even be discarded when necessary. I find that view curious.
I’m not sure they’re wrong, to be honest (assuming an average cross-section of people). Rationality is an extremely long-term approach and payoff. I am not sure it would even work for the majority of people, and if it does, I’m not sure whether it hits diminishing returns compared to other strategies. The introductory text (the Sequences) is 9,000 pages long and the supplementary texts (Kahneman, Ariely, etc.) take it up to 11,000. I’m considered a very fast reader and it took me 3 unemployed months of constant reading to get through. For a good period of that time I was getting a negative return; I became a worse person. It took a month after that to end up net positive. I don’t want to harp on about unfair inherent advantages, but I just took a look at the survey results from last year and the lowest IQ was 124.6. This stuff could be totally ineffective for average people and we would have no way of knowing. Simply being told the best path for self-improvement or effective action, whether by a rationalist or just someone who knows what they’re doing, a normal expert in whatever field, may well be more optimal for a great many people. Essentially data-driven life coaching. I can’t test this hypothesis one way or the other without attempting to teach an average person rationalism, and I don’t know if anyone has done that, nor how I would find out if they had.
So far as instrumental rationality not being, at its core, about truth, to be honest I broadly agree with them. There may be a term in my utility function for truth, but it is not a large term, not nearly so important as the term for helping humanity or the one for interesting diversions. I seek truth not as an end in itself, but because it is so damn useful for achieving other things I care about. If I were in a world where my ignorance would save a life with no downside, while my knowledge had no long-term benefit, then I would stay ignorant. If my ignorance was a large enough net benefit to me and others, I would keep it. In the arena of CEO compensation, for example, increased transparency leads to runaway competition between CEOs to have the highest salary, shafting everyone else. Sure, the truth is known, but it has only made things worse. I’m fairly consequentialist like that.
Note that in this situation I’d still call for transparency on war crimes, torture and so on. The earlier the better. If a person knows that their actions will become known within 5 years and that it will affect them personally, that somewhat constrains their actions against committing an atrocity. The people making the decisions obviously need accurate data to make said decisions in all cases, but the good or damage caused by the public availability of that data is another thing entirely. Living in a world where everyone was a rationalist and the truth never caused problems would be nice, but that’s the should-world, not the is-world.
It so happens that in this world we live in, using these brains we have, seeking the truth and not being satisfied with a lie or a distortion is an extremely effective way to gain power over the world. With our current hardware, truth-seeking may be the best way to understand enough to get things done without self-deception, but seeking the truth itself is incidental to the real goal.
Thanks for the interesting comments. I’ve not been on LW for long, and so far I’m being selective about which sequences I’m reading. I’ll see how that works out (or will I? lol).
I think my concern on the truthiness part of what you say is that there is an assumption that we can accurately predict the consequences of deciding to believe a non-truth. I think that’s rarely the case. We rarely get personal corrective evidence, though, because it’s the nature of self-deception that we’re oblivious to the fact that we’ve screwed up. Applying a general rule of truthiness is a far more effective approach imo.
Agreed, a general rule of truthiness is definitely a very effective approach and probably the most effective approach, especially once you’ve started down the path. So far as I can tell, stopping halfway through is… risky in a way that never having started is not. I only recently finished the sequences myself (apart from the last half of QM). At the time of starting I thought it was essentially the age-old trade-off between knowledge and happy ignorance, but it appears that at some point while reading the stuff I hit critical mass, and now I’m starting to see how I could use knowledge to have more happiness than if I were ignorant, which I wasn’t expecting at all. Which sequences are you starting with?
By the way, I just noticed I screwed up on the survey results: I read the standard deviation as the range. IQ should be mean 138.2 with SD 13.6, implying at least 95% are above 111 and at least 99% above 103.5. It changes my first argument a little, but I think the core of it is still sound.
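If it helps, here’s a quick sanity check of those figures, assuming the survey IQs are roughly normally distributed (that assumption is mine, not the survey’s). The thresholds 111 and 103.5 sit about 2 and 2.55 standard deviations below the quoted mean, so the stated percentages are, if anything, conservative:

```python
# Rough sanity check of the survey IQ percentiles, assuming a normal
# distribution with the mean/SD quoted above (the normality assumption
# is mine; only the mean and SD come from the survey comment).
from math import erf, sqrt

MEAN, SD = 138.2, 13.6  # figures quoted in the comment above

def fraction_above(threshold, mean=MEAN, sd=SD):
    """Fraction of a Normal(mean, sd) population above `threshold`."""
    z = (threshold - mean) / sd
    # Standard normal survival function via the error function.
    return 0.5 * (1 - erf(z / sqrt(2)))

print(f"above 111:   {fraction_above(111):.1%}")    # ~97.7% (z = -2.0)
print(f"above 103.5: {fraction_above(103.5):.1%}")  # ~99.5% (z ≈ -2.55)
```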
Well, I’ve done Map & Territory and have skimmed through random selections of other things. Pretty early days, I know! So far I’ve not run into anything particularly objectionable to me or conflicting with any of the decent philosophy I’ve read. My main concern is this “truth as incidental” thing. I just posted on the topic: http://lesswrong.com/lw/l6z/the_truth_and_instrumental_rationality/
Ah, I think you may have gotten the wrong idea when I said truth was incidental: that a thing is incidental does not stop it from being useful and a good idea; it is just not a goal in and of itself. Fortunately, no one here is actually suggesting active self-deception as a viable strategy. I would suggest reading Terminal Values and Instrumental Values. Truth-seeking is an instrumental value, in that it is useful for reaching the terminal values of whatever your actual goals are. So far as I can tell, we actually agree on the subject for all relevant purposes.
You may also want to read The Tragedy of Group Selectionism.
Thanks for the group selection link. Unfortunately I’d have to say, to the best of my non-expert judgement, that the current trend in the field disagrees somewhat with Eliezer in this regard. The 60s group selection was definitely overstated and problematic, but quite a few biologists feel that this resulted in the idea being ruled out entirely, in a kind of overreaction to the original mistakes. Even Dawkins, who has traditionally dismissed group selection, acknowledged it may play more of a role than he previously thought. So it’s been refined and is making a bit of a comeback, despite opposition. Of course, only a few point to it as the central explanation for altruism, but the result of my own investigation makes me think that the biological component of altruism is best explained by a mixed model of group selection, kin selection and reciprocation. We additionally haven’t really got a reliable map of the nature/nurture balance of altruism either, so I suspect the field will “evolve” further.
I’ve read the values argument. I acknowledge that no one is claiming the truth is BAD exactly, but my suggestion here is that unless we deliberately and explicitly weigh it into our thought process, even when it has no apparent utility, we run into unforeseeable errors that compound upon each other without our awareness of them doing so. Crudely put, lazy approaches to the truth come unstuck, but we never realise it. I take it my post has failed to communicate that aspect of the argument clearly? :-(
Oh, I’d add that I agree we agree in most regards on the topic.
Really? I was not aware of that trend in the field; maybe I should look into it.
Well, at least I understand you now.