Thanks for the interesting comments. I’ve not been on LW for long, and so far I’m being selective about which sequences I’m reading. I’ll see how that works out (or will I? lol).
I think my concern with the truthiness part of what you say is that it assumes we can accurately predict the consequences of a decision to believe a non-truth. I think that’s rarely the case. We’re also rarely given personal corrective evidence, because it’s the nature of self-deception that we’re oblivious to having screwed up. Applying a general rule of truthiness is a far more effective approach imo.
Agreed, a general rule of truthiness is definitely a very effective approach, and probably the most effective one, especially once you’ve started down the path. So far as I can tell, stopping halfway through is… risky in a way that never having started is not. I only recently finished the sequences myself (apart from the last half of QM). When I started, I thought it was essentially the age-old trade-off between knowledge and happy ignorance, but at some point in the reading I hit critical mass, and now I’m starting to see how I could use knowledge to have more happiness than if I were ignorant, which I wasn’t expecting at all. Which sequences are you starting with?
By the way, I just noticed I screwed up on the survey results: I read the standard deviation as the range. IQ should be mean 138.2 with SD 13.6, which (under a normal approximation) implies about 97.7% of respondents score above 111 and about 99.5% above 103.5. It changes my first argument a little, but I think the main core is still sound.
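As a quick sanity check of those figures, here is a short sketch using the stated survey mean and SD. It assumes the scores are normally distributed, which is only an approximation for a self-selected survey sample:

```python
# Sanity-check the survey IQ figures quoted above: mean 138.2, SD 13.6.
# Assumes a normal distribution, which is an approximation for this sample.
from statistics import NormalDist

iq = NormalDist(mu=138.2, sigma=13.6)

# Fraction of respondents expected above a given cutoff = 1 - CDF(cutoff)
for cutoff in (111, 103.5):
    frac_above = 1 - iq.cdf(cutoff)
    print(f"P(IQ > {cutoff}) ≈ {frac_above:.1%}")
```

A cutoff of 111 sits exactly 2 SD below the mean (z = −2.0), giving roughly 97.7% above it rather than 95%; 103.5 is about 2.55 SD below the mean, giving roughly 99.5% above.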
Well, I’ve done Map & Territory and skimmed through random selections of other things. Pretty early days, I know! So far I’ve not run into anything particularly objectionable to me, or conflicting with any of the decent philosophy I’ve read. My main concern is this “truth as incidental” thing. I just posted on the topic:
http://lesswrong.com/lw/l6z/the_truth_and_instrumental_rationality/
Ah, I think you may have gotten the wrong idea when I said truth was incidental: that a thing is incidental doesn’t stop it from being useful and a good idea; it’s just not a goal in and of itself. Fortunately, no one here is actually suggesting active self-deception as a viable strategy. I would suggest reading Terminal Values and Instrumental Values. Truth-seeking is an instrumental value, in that it is useful for reaching the terminal values of whatever your actual goals are. So far as I can tell, we actually agree on the subject for all relevant purposes.

You may also want to read The Tragedy of Group Selectionism.
Thanks for the group selection link. Unfortunately I’d have to say, to the best of my non-expert judgement, that current trends in the field disagree somewhat with Eliezer here. Sixties-era group selection was definitely overstated and problematic, but quite a few biologists feel the idea was then ruled out entirely in a kind of overreaction to the original mistakes. Even Dawkins, who has traditionally dismissed group selection, has acknowledged it may play more of a role than he previously thought. So it’s been refined and is making a bit of a comeback, despite opposition. Of course, only a few point to it as the central explanation for altruism, but my own investigation makes me think the biological component of altruism is best explained by a mixed model of group selection, kin selection, and reciprocation. We also haven’t really got a reliable map of the nature/nurture balance in altruism, so I suspect the field will “evolve” further.
I’ve read the values argument. I acknowledge that no one is claiming truth is BAD exactly, but my suggestion is that unless we deliberately and explicitly weigh it into our thought process, even when it has no apparent utility, we run into unforeseeable errors that compound on each other without our awareness. Crudely put: lazy approaches to the truth come unstuck, but we never realise it. I take it my post failed to communicate that aspect of the argument clearly? :-(
Oh, I’ll add that I agree we agree in most regards on the topic.
Really? I was not aware of that trend in the field, maybe I should look into it.
Well, at least I understand you now.