Would you kindly taboo your words (http://lesswrong.com/lw/nu/taboo_your_words) and try posting again? I think that many individuals who describe themselves as rationalists would be in favor of “white lies”, and I’m confused as to why you perceive this as a big difference between yourself and the group.
I assume you meant “more in the same vein” rather than simply “again”.
I perceive this as a difference between myself and the group because of the large number of posts I’ve read that say rationalists should believe what is true and not believe what is false. The sentiment “that which can be destroyed by truth should be” is repeated several times in several different places. My memory is far from perfect, but I don’t recall any arguments in favor of lies. You claim most rationalists are in favor of “white lies”? I didn’t get that from my reading. But then, I’ve only started in on the site; it will probably take me weeks to absorb a significant part of it, so if someone can give me a pointer, I’d be grateful.
I am much more inclined to go along with the “rationalists should win” line of thought. I want to believe whatever is useful. For example, I believe that it’s impossible to simulate intelligence without being intelligent. I’ve thought about it, and I have reasons for that belief, but I can’t prove it’s true, and I don’t care. “Knowing” that it’s impossible to simulate intelligence without being intelligent lets me look at the Chinese Room Argument and conclude instantly that it’s wrong. It’s useful to believe that simulated intelligence requires actual intelligence. If you want me to stop believing, you need only show me the lie in the belief. But if you want me to evangelize the truth, you’d need to show me the harm in the lie as well.
Santa Claus isn’t a white lie. Santa Claus is a massive conspiracy, a gigantic cover-up perpetrated by millions of adults. Lies on top of lies, with corporations getting in on the action to sell products (http://www.snopes.com/holidays/christmas/walmart.asp), a lie that, when discovered, leaves children shattered, their confidence in the world shaken. And yet, it increases the amount of joy in the world by a noticeable amount. It brings families together; it teaches us to be caring and giving. YMMV of course, but many would consider Christmas utilons > Christmas evilons.
Most importantly, Santa persists. People make mistakes, but natural selection removes really bad mistakes from the meme pool. As a rule of thumb, things that people actually do are far more likely to be good for them than bad, or at least not harmful. I believe that’s a large part of why, when theory says X and intuition says Y, we look very long and hard at the theory before accepting it as correct. Our intuitions aren’t always correct, but they are usually correct. There are some lies we believe intuitively. In the court of opinion, I believe they should be presumed good until proven harmful.
Well, choosing to believe lies that are widely believed is certainly convenient, in that it does not put me at risk of conflict with my tribe, does not require me to put in the effort of believing one thing while asserting belief in another to avoid such conflict, and does not require me to put in the effort of carefully evaluating those beliefs.
Whether it’s useful—that is, whether believing a popular lie leaves me better off in the long run than failing to believe it—I’m not so sure. For example, can you clarify how your belief about the impossibility of simulating intelligence with an unintelligent system, supposing it’s false, leaves you better off than if you knew the truth?
OK, suppose it’s false. Rather than wasting time disproving the Chinese Room Argument (CRA), I simply act on my “false” belief and reject it out of hand. Since the CRA is invalid for many other reasons as well, I’m still right. Win.
Generalizing: say I have an approximation that usually gives me the right answer, but on rare occasions gives a wrong one. If I work through a much more complicated method, I can arrive at the correct answer. I believe the approximation is correct. As long as

effort involved in complicated method > cost of being wrong

I’m better off not using it. If I knew the truth, then I could still use the approximation, but I now have an extra step in my thinking. Instead of

Approximate.
Reject.

it’s

Approximate.
Ignore possibility of being wrong.
Reject.
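To make that trade-off concrete, here is a minimal sketch of the expected-cost comparison; the error rate and cost figures are purely assumed for illustration, not taken from anything above:

```python
# Illustrative only: the error rate and cost figures below are assumptions, not measurements.
approx_effort = 1.0         # cost of applying the quick approximation once
careful_effort = 20.0       # cost of working through the complicated method once
error_rate = 0.01           # how often the approximation gives a wrong answer
cost_of_being_wrong = 50.0  # penalty paid when the approximation misleads you

expected_cost_approx = approx_effort + error_rate * cost_of_being_wrong
expected_cost_careful = careful_effort  # assumed to always give the right answer

print(expected_cost_approx)   # 1.5
print(expected_cost_careful)  # 20.0
# With these numbers the approximation wins; the careful method only pays off when
# error_rate * cost_of_being_wrong exceeds the extra effort it demands.
```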
Ah, I see what you mean. Sure, agreed: as long as the false beliefs I arrive at using method A, which I would have avoided using method B, cost me less to hold than the additional costs of B, I do better with method A despite holding more false beliefs. And, sure, if the majority of false-belief-generating methods have this property, then it follows that I do well to adopt false-belief-generating methods as a matter of policy.
I don’t think that’s true of the world, but I also don’t think I can convince you of that if your experience of the world hasn’t already done so.
I’m reminded of a girl I dated in college who had a favorite card trick: she would ask someone to pick a card, then say “Is your card the King of Clubs?” She was usually wrong, of course, but she figured that when she was right it would be really impressive.
For example, I believe that it’s impossible to simulate intelligence without being intelligent. I’ve thought about it, and I have reasons for that belief, but I can’t prove it’s true, and I don’t care. “Knowing” that it’s impossible to simulate intelligence without being intelligent lets me look at the Chinese Room Argument and conclude instantly that it’s wrong. It’s useful to believe that simulated intelligence requires actual intelligence.
That doesn’t strike me as being particularly useful. What’s so great about the ability to (justify to yourself that it’s okay to) skip over the Chinese Room Argument that it’s worth making your overall epistemology provably worse at figuring out what’s true?
More generally, there’s a big difference between lying to yourself and lying to other people. Lying to others is potentially useful when their actions, if they knew the facts, would contradict your goals. It’s harder to come up with a case where your actions would contradict your own goals if and only if you’re better informed. (Though there are some possible cases, e.g. keeping yourself more optimistic and thus more productive by shielding yourself from unhappy facts.)
What’s so great about the ability to (justify to yourself that it’s okay to) skip over the Chinese Room Argument that it’s worth making your overall epistemology provably worse at figuring out what’s true?
Nothing. Can you actually prove it’s worse, or were you just asking a hypothetical?
More generally, there’s a big difference between lying to yourself and lying to other people. Lying to others is potentially useful when their actions, if they knew the facts, would contradict your goals. It’s harder to come up with a case where your actions would contradict your own goals if and only if you’re better informed. (Though there are some possible cases, e.g. keeping yourself more optimistic and thus more productive by shielding yourself from unhappy facts.)
Yes, the thing I’m not sure of (and note, I’m only unsure, not certain that it’s false) is the idea that believing a lie is always bad.
“Clap your hands if you believe” sounds ridiculous, but placebos really can help if you believe in them—we have proof.
But this is not a certain thing. That I can cherry-pick examples where being “wrong” in one’s beliefs has a greater benefit means very little. The bottom of the cliffs of philosophy is littered with the bones of exceptionally bad ideas. We are certainly worse off if we believe every lie, and there may well be no better way to determine good from bad than rationality. I’m just not certain that’s the case.
Can you actually prove [my epistemology is] worse [at figuring out what’s true], or were you just asking a hypothetical?
No, I can prove that, provided that I’m understanding correctly what approach you’re using. You said earlier:
I’ve thought about it, and I have reasons for [believing that a non-intelligence cannot simulate intelligence], but I can’t prove it’s true, and I don’t care.
By “don’t care” I take it that you mean that you will not update your confidence level in that belief if new evidence comes in. The closer you get to a Bayesian ideal, the better you’ll be at getting the highest increases in map accuracy out of a given amount of input. By that criterion, updating on evidence (no matter how roughly) is always closer than ignoring it, provided that you can at least avoid misinterpreting evidence so much that you update in the wrong direction.
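As a minimal illustration of that point (the prior and likelihoods here are invented numbers, chosen only to show the arithmetic of updating versus ignoring the evidence):

```python
# Hypothetical numbers: a prior of 0.9 in hypothesis H, and a piece of evidence
# that is twice as likely if H is false as it is if H is true.
prior = 0.9
p_evidence_given_h = 0.3
p_evidence_given_not_h = 0.6

posterior = (p_evidence_given_h * prior) / (
    p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
)
print(round(posterior, 3))  # 0.818 -- confidence drops a little instead of staying pinned at 0.9

# Ignoring the evidence amounts to acting as if posterior == prior,
# which forgoes whatever map accuracy the update would have bought.
```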
That’s the epistemological angle. But you also run into trouble looking at it instrumentally:
In order for you to most effectively update your beliefs in such a way as to have the beliefs that give you the highest expected utility, you must have accurate levels of confidence for those beliefs somewhere! It might be okay to disbelieve that nuclear war is possible if the thought depresses you and an actual nuclear war is only 0.1% likely; however, if it’s 90% likely and you assign any reasonable amount of value to being alive even if depressed, then you’re better off believing the truth because you’ll go find a deep underground shelter to be depressed in instead of being happily vaporized on the surface!
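Spelled out as arithmetic (the utility figures below are assumptions picked only to illustrate the comparison, not anything from the discussion above):

```python
# Hypothetical utilities: happily oblivious is slightly better than depressed-but-prepared,
# and being vaporized is far worse than either.
u_oblivious_no_war = 10.0
u_oblivious_war = -1000.0   # happily vaporized on the surface
u_prepared_no_war = 8.0     # mildly depressed, but fine
u_prepared_war = -100.0     # depressed in a deep shelter, still alive

def expected_utility(p_war, u_no_war, u_war):
    return (1 - p_war) * u_no_war + p_war * u_war

for p_war in (0.001, 0.9):
    eu_oblivious = expected_utility(p_war, u_oblivious_no_war, u_oblivious_war)
    eu_prepared = expected_utility(p_war, u_prepared_no_war, u_prepared_war)
    print(p_war, round(eu_oblivious, 2), round(eu_prepared, 2))
# At p_war = 0.001 the comfortable false belief narrowly wins (8.99 vs 7.89);
# at p_war = 0.9 it loses badly (-899.0 vs -89.2).
```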
Having two separate sets of beliefs like this is just asking to walk into lots of other well-known problematic biases; most notably, you are much more likely in practice to simply pick between your true-belief set and your instrumental-belief set depending on which seems most emotionally and socially appropriate moment-to-moment, rather than (as would be required for this hack to be generally useful) always using your instrumental beliefs for decision-making and emotional welfare but never for processing new evidence.
All that said, I agree with your overall premise: there is nothing requiring that true belief always be better than false belief for human welfare. However, it is better much more often than not. And as I described above, maintaining two different sets of beliefs for different purposes is more apt to trigger standard human failure modes than just having a single set and disposing of cognitive dissonance as much as possible. Given all that, I argue that we are best off pursuing a general strategy of truth-seeking in our beliefs except when there is overwhelming external evidence for particular beliefs being bad; and even then, it’s probably a better strategy overall to simply avoid finding out about such things somehow than to learn them and try to deceive yourself afterwards.
I’m not sure I understand. The reason I like that particular belief is because it lets me reject false beliefs with greater ease. If holding a belief reduces my ability to do that, then is it, of necessity, false? Wouldn’t that mean that my belief must be true?
The reason I like that particular belief is because it lets me reject false beliefs with greater ease.
How do you know those propositions being rejected are false?
If it’s because the first belief leads to that conclusion, then that’s circular logic.
If it’s because you have additional evidence that the rejected propositions are false, and that their falseness implies the first belief’s trueness, then you have a simple straightforward dependency, and all this business about instrumental benefits is just a distraction. However, you still have to be careful not to let your evidence flow backwards, because that would be circular logic.
I don’t know that the propositions being rejected are false any more than I know that the original proposition is true.
But I do know that in every case where I went through the long and laborious process of analyzing the proposition, it worked out the same as if I had just used the shortcut of assuming my original proposition is true. It’s not just some random belief; it’s field-tested. In point of fact, it’s been field-tested so much that I now know I would continue to act as if it were true even if evidence were presented that it was false. I would assume that it’s more likely that the new evidence was flawed, until the preponderance of the evidence was just overwhelming, or somebody supplied a new test that was nearly as good and provably correct.
That sounds pretty good then. It’s not quite at a Bayesian ideal; when you run across evidence that weakly contradicts your existing hypothesis, that should result in a weak reduction in confidence, rather than zero reduction. But overall, requiring a whole lot of contradictory evidence in order to overturn a belief that was originally formed based on a lot of confirming evidence is right on the money.
Actually, though, I wanted to ask you another question: what specific analyses did you do to arrive at these conclusions?