I’ve been reading through the sequences, and one piqued my desire to post, so I created an account.
There are actually many things being discussed here that interest me.
I’m not sure I’m a rationalist though, as I believe there are some lies that should be maintained rather than destroyed.
I’m interested in personal identity, not “Quantum Mechanics and Personal Identity”, but the question of where “me” ends. The sound bite is “Am I my hat?” or, to be more verbose: is my hat an extension of myself, and thus a part of me?
Some would say “of course not”.
If you’re thinking that, then imagine I started beating you with my hat. Would you ask my hat to stop, or would you ask me? Where do we stop being “us” and start being “them”?
Is our hair part of us? What about when it’s cut? What if we weave it into a hat, and does it matter if we cut it first?
Let me be clear: I don’t really care how a particular definition of “me” would resolve hair clippings; I’m interested in what definitions people actually use.
Now, if I am to argue “Am I my hat?” from my world view, I would say that when I remove my hat, pieces of my skin are on the hat, and fibres of the hat are in my hair. That is one point of view.
Now, say you created an exact copy of yourself, or me, or any other hat wearing person, only one of these two identical people was without headwear; would it still be the same person?
When do groups of people begin reacting in a dynamic that might be described as an “us vs. them” mentality?
I think you need to dissolve the question “Am I my hat?” as well as the “us vs. them” issue.
See points 5, 10, 11, 12, 13, 14, 15, 16, 17 and 29 and then play a game of taboo.
Yes, almost exactly, though perhaps the question to dissolve is “what is me/self/I”.
“Am I my hat?” is just one, purposely bad, example of trying to do that.
Now, say you created an exact copy of yourself, or me, or any other hat wearing person, only one of these two identical people was without headwear; would it still be the same person?
“making an exact copy” or “exact copy sans hat” seems to require already knowing the answer to “what is me/self/I”. I.e., if the definition of “me” includes a specific set of atoms, then it’s not even possible; on the other hand, if “me” is just a collection of thought processes, then hats are not required. The precision needed is, of course, dependent on the definition used, and ultimately the purpose. I’d say I’m most often thinking about it when the idea of making a copy comes up, as in: would that really be a copy?
Hope that helps :)
Indeed, playing Taboo, or rather, thinking about how I would play Taboo was surprisingly helpful.
I’ve recently tried thinking of myself as a pattern which likes reproducing imperfect copies of itself. This is a goal (I would want more imperfect copies like me developed in the future), not a bug of trying to produce exact copies of myself and failing at it.
Interesting, I never considered defining “me” in terms of the goals I’m attempting to accomplish, yet now that you mention it, it seems obvious (hindsight bias in action). It also brings to mind similar categories: who I know, who I’m friends with, social standing, ownership (gah, defining one horribly fuzzy concept with yet another)… HungryTurtle mentioned the roles one plays.
Imagine a snowball that’s rolling down an infinite slope. As it descends, it picks up more snow, rocks, sticks, maybe some bugs, I don’t know. Maybe there are dry patches, too, and the snowball loses some snow. Maybe the snowball hits a boulder and loses half of its snow, and what remains is less than 10% original snow material. But it still can be said to be this snowball and not that snowball because its composition and history are unique to it—it can be identified by its past travels, its momentum, and the resulting trajectory. If this can be taken to be one’s life (an analogy that I hope was obvious), then the “I” that we refer to in our own lives isn’t even the whole snowball but merely the place where the snowball touches the ground.
What question is left to ask? There are some fibres that do not originate in your biochemistry stuck in your hair, and there are some materials that originate in you stuck in that bundle of fibres that was previously resting on your cranium.
Why is it important to have a sharp definition of “self”? Isn’t that presuming it has intrinsic meaning? What you refer to as “you” is an emergent system that has causes in your entire past light cone and repercussions in your entire future light cone. There is a constant flux of matter and energy surrounding the substrate that runs your consciousness program. It is a continuous construct; there isn’t a line to be drawn at all.
I’m not sure I’m a rationalist though, as I believe there are some lies that should be maintained rather than destroyed.
Can you tell me more about this? This statement piqued my curiosity, but I don’t know enough about what you meant to ask anything specific, so I’m left with vague questions like “Which lies?” and “Under which circumstances?”
To contribute by answering your question about a definition of “me”: I’ve recently tried thinking of myself as a pattern which likes reproducing imperfect copies of itself. This is a goal (I would want more imperfect copies like me developed in the future), not a bug of trying to produce exact copies of myself and failing at it.
It seems to be working so far, but really, I haven’t held the belief long enough for it to harden, so I can’t be confident that it’s likely to work out in the future at all. I would currently not be very surprised if someone were to say “Actually, Michaelos, that’s a flawed way of thinking, and here is a link to why.”, and for my reaction to be “Yep, you’re correct, I missed something. Let me update that belief.” I suppose another way of putting it is that my current belief is fresh off the revolutionary, and the apologist hasn’t had time to come up with a lot of defensive evidence.
I was starting to discuss the “us vs. them” issue as well, but I think MagnetoHydroDynamics’s link says it better than I do.
I’m not sure I’m a rationalist though, as I believe there are some lies that should be maintained rather than destroyed.
Can you tell me more about this? This statement piqued my curiosity, but I don’t know enough about what you meant to ask anything specific, so I’m left with vague questions like “Which lies?” and “Under which circumstances?”
It’s a qualification on me: I’ve decided to join the discussion, but I’m not sure about joining the group. In other words, statements from me shouldn’t be viewed as rationalist statements, since I’m (probably) not one. Let me expand:
Some lies are worth preserving; I think a rationalist would be in favor of keeping works of fiction, for example.
We all know that Hamlet isn’t real.
These sorts of lies aren’t what I was talking about, since they are not destroyed by the truth.
Pointing out that Hamlet is fiction doesn’t diminish its value or effectiveness.
I was mainly thinking of “polite” lies that happen in everyday situations:
Your deformity doesn’t bother me.
Whatever you want, it’s all the same to me.
I’m sure my intentions toward your daughter are every bit as honorable as yours were toward her mother.

Lies that you tell when it’s likely that the other person knows you are lying, yet the sentiment behind them is such that we accept them without comment. Sure, you’re freaked out by the one-armed man, but you intend to do your best to act as if he were a normal person. Both parties might even be aware that these lies are, in fact, lies, but they both maintain the fiction.
The truth in these situations would be very revealing. In a world where everyone could instantly know the truth of all statements, these lies would be destroyed, instantly. We’d know the truth, yet I feel we would not be better off for it.
Then there are lies like “You’re a good boy.” More of a wish than a truth. Yet by lying in this way, a parent is hoping to cause it to become true: a kind of self-fulfilling prophecy (they hope). Those are also lies I think are worth preserving.
Then there are the lies like “Santa Claus is a real person who will bring you presents if you’re good.” I think destroying this kind of lie would be a good thing, but I’m not certain.
Would you kindly http://lesswrong.com/lw/nu/taboo_your_words and try posting again? I think that many individuals who describe themselves as rationalists would be in favor of “white lies”, and I’m confused as to why you perceive this as a big difference between yourself and the group.
I assume you meant “more in the same vein” rather than simply “again”.
I perceive this as a difference between myself and the group because of the large number of posts I’ve read that say rationalists should believe what is true, and not believe what is false. The sentiment “that which can be destroyed by truth should be” is repeated several times in several different places. My memory is far from perfect, but I don’t recall any arguments in favor of lies. You claim most rationalists are in favor of “white lies”? I didn’t get that from my reading. But then I’ve only started in on the site; it will probably take me weeks to absorb a significant part of it, so if someone can give me a pointer, I’d be grateful.
I am much more inclined to go along with the “rationalists should win” line of thought. I want to believe whatever is useful. For example, I believe that it’s impossible to simulate intelligence without being intelligent. I’ve thought about it, and I have reasons for that belief, but I can’t prove it’s true, and I don’t care. “Knowing” that it’s impossible to simulate intelligence without being intelligent lets me look at the Chinese Room Argument and conclude instantly that it’s wrong. It’s useful to believe that simulated intelligence requires actual intelligence. If you want me to stop believing, you need only show me the lie in the belief. But if you want me to evangelize the truth, you’d need to show me the harm in the lie as well.
Santa Claus isn’t a white lie. Santa Claus is a massive conspiracy, a gigantic cover-up perpetrated by millions of adults. Lies on top of lies, with corporations getting in on the action to sell products (http://www.snopes.com/holidays/christmas/walmart.asp), a lie that, when discovered, leaves children shattered, their confidence in the world shaken. And yet, it increases the amount of joy in the world by a noticeable amount. It brings families together; it teaches us to be caring and giving. YMMV of course, but many would consider Christmas utilons > Christmas evilons.
Most importantly, Santa persists. People make mistakes, but natural selection removes really bad mistakes from the meme pool. As a rule of thumb, things that people actually do are far more likely to be good for them than bad, or at least not harmful. I believe that’s a large part of why, when theory says X and intuition says Y, we look very long and hard before accepting that theory as correct. Our intuitions aren’t always correct, but they are usually correct. There are some lies we believe intuitively. In the court of opinion, I believe they should be presumed good until proven harmful.
Well, choosing to believe lies that are widely believed is certainly convenient, in that it does not put me at risk of conflict with my tribe, does not require me to put in the effort of believing one thing while asserting belief in another to avoid such conflict, and does not require me to put in the effort of carefully evaluating those beliefs.
Whether it’s useful—that is, whether believing a popular lie leaves me better off in the long run than failing to believe it—I’m not so sure. For example, can you clarify how your belief about the impossibility of simulating intelligence with an unintelligent system, supposing it’s false, leaves you better off than if you knew the truth?
OK, suppose it’s false. Rather than wasting time disproving the CRA, I simply act on my “false” belief and reject it out of hand. Since the CRA is invalid for many other reasons as well, I’m still right. Win.
Generalizing: say I have an approximation that usually gives me the right answer, but on rare occasion gives a wrong one. If I work through a much more complicated method, I can arrive at the correct answer. I believe the approximation is correct. As long as

effort involved in complicated method > cost of being wrong

I’m better off not using the complicated method. If I knew the truth, then I could still use the approximation, but I now have an extra step in my thinking. Instead of

Approximate.
Reject.

it’s

Approximate.
Ignore possibility of being wrong.
Reject.
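A minimal sketch of that trade-off, with purely hypothetical numbers (p_wrong, cost_of_being_wrong, and effort_of_full_method are invented for illustration, not taken from anything above):

```python
# Hypothetical numbers, just to make the cost comparison concrete.
p_wrong = 0.02               # how often the quick approximation misleads me
cost_of_being_wrong = 5.0    # cost (in effort/utility units) when it does
effort_of_full_method = 1.0  # cost of always running the careful analysis

expected_cost_approx = p_wrong * cost_of_being_wrong  # 0.1 per use
expected_cost_full = effort_of_full_method            # 1.0 per use

if expected_cost_approx < expected_cost_full:
    print("Approximate. Ignore possibility of being wrong. Reject.")
else:
    print("The mistakes are costly enough that the careful method pays for itself.")
```

With these made-up numbers the shortcut wins by a factor of ten; the comparison only flips when errors are frequent or expensive.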
Ah, I see what you mean. Sure, agreed: as long as the false beliefs I arrive at using method A, which I would have avoided using method B, cost me less to hold than the additional costs of B, I do better with method A despite holding more false beliefs. And, sure, if the majority of false-belief-generating methods have this property, then it follows that I do well to adopt false-belief-generating methods as a matter of policy.
I don’t think that’s true of the world, but I also don’t think I can convince you of that if your experience of the world hasn’t already done so.
I’m reminded of a girl I dated in college who had a favorite card trick: she would ask someone to pick a card, then say “Is your card the King of Clubs?” She was usually wrong, of course, but she figured that when she was right it would be really impressive.
For example, I believe that it’s impossible to simulate intelligence without being intelligent. I’ve thought about it, and I have reasons for that belief, but I can’t prove it’s true, and I don’t care. “Knowing” that it’s impossible to simulate intelligence without being intelligent lets me look at the Chinese Room Argument and conclude instantly that it’s wrong. It’s useful to believe that simulated intelligence requires actual intelligence.
That doesn’t strike me as being particularly useful. What’s so great about the ability to (justify to yourself that it’s okay to) skip over the Chinese Room Argument that it’s worth making your overall epistemology provably worse at figuring out what’s true?
More generally, there’s a big difference between lying to yourself and lying to other people. Lying to others is potentially useful when their actions, if they knew the facts, would contradict your goals. It’s harder to come up with a case where your actions would contradict your own goals if and only if you’re better informed. (Though there are some possible cases, e.g. keeping yourself more optimistic and thus more productive by shielding yourself from unhappy facts.)
What’s so great about the ability to (justify to yourself that it’s okay to) skip over the Chinese Room Argument that it’s worth making your overall epistemology provably worse at figuring out what’s true?
Nothing. Can you actually prove it’s worse, or were you just asking a hypothetical?
More generally, there’s a big difference between lying to yourself and lying to other people. Lying to others is potentially useful when their actions, if they knew the facts, would contradict your goals. It’s harder to come up with a case where your actions would contradict your own goals if and only if you’re better informed. (Though there are some possible cases, e.g. keeping yourself more optimistic and thus more productive by shielding yourself from unhappy facts.)
Yes, the thing I’m not sure of (and note, I’m only unsure, not certain that it’s false) is the idea that believing a lie is always bad.
“Clap your hands if you believe” sounds ridiculous, but placebos really can help if you believe in them; we have proof.
But this is not a certain thing. That I can cherry-pick examples where being “wrong” in one’s beliefs has a greater benefit means very little. The bottom of the cliffs of philosophy is littered with the bones of exceptionally bad ideas. We are certainly worse off if we believe every lie, and there may well be no better way to determine good from bad than rationality. I’m just not certain that’s the case.
Can you actually prove [my epistemology is] worse [at figuring out what’s true], or were you just asking a hypothetical?
No, I can prove that, provided that I’m understanding correctly what approach you’re using. You said earlier:
I’ve thought about it, and I have reasons for [believing that a non-intelligence cannot simulate intelligence], but I can’t prove it’s true, and I don’t care.
By “don’t care” I take it that you mean that you will not update your confidence level in that belief if new evidence comes in. The closer you get to a Bayesian ideal, the better you’ll be at getting the highest increases in map accuracy out of a given amount of input. By that criterion, updating on evidence (no matter how roughly) is always closer than ignoring it, provided that you can at least avoid misinterpreting evidence so much that you update in the wrong direction.
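A toy illustration of that point, assuming a single binary hypothesis and an invented likelihood ratio (the 90% prior and the 1/3 ratio are made up for the example):

```python
def bayes_update(prior, likelihood_ratio):
    """One Bayesian update.

    likelihood_ratio = P(evidence | belief) / P(evidence | not belief).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Made-up numbers: start 90% confident, then see evidence that is three
# times as likely if the belief is false as if it is true.
confidence = bayes_update(0.9, 1 / 3)
print(round(confidence, 2))  # 0.75 -- a weak update, but an update

# "Not caring" amounts to always using a likelihood ratio of 1:
# the confidence never moves, no matter what comes in.
```

Even a rough, qualitative version of this (“that’s evidence against, so I’m a bit less sure”) does better than multiplying by 1 every time.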
That’s the epistemological angle. But you also run into trouble looking at it instrumentally:
In order for you to most effectively update your beliefs in such a way as to have the beliefs that give you the highest expected utility, you must have accurate levels of confidence for those beliefs somewhere! It might be okay to disbelieve that nuclear war is possible if the thought depresses you and an actual nuclear war is only 0.1% likely; however, if it’s 90% likely and you assign any reasonable amount of value to being alive even if depressed, then you’re better off believing the truth because you’ll go find a deep underground shelter to be depressed in instead of being happily vaporized on the surface!
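A rough expected-utility version of that example, with invented utility numbers (50, 100, and 0 are arbitrary; only their ordering matters):

```python
# Invented utilities, only to make the structure of the example concrete.
U_ALIVE_DEPRESSED = 50   # alive in a shelter, but gloomy
U_ALIVE_HAPPY = 100      # blissfully unaware on the surface
U_DEAD = 0               # vaporized

def expected_utility(p_war, go_to_shelter):
    if go_to_shelter:
        return U_ALIVE_DEPRESSED  # safe either way, just depressed
    return p_war * U_DEAD + (1 - p_war) * U_ALIVE_HAPPY

for p_war in (0.001, 0.9):
    shelter = expected_utility(p_war, True) > expected_utility(p_war, False)
    print(p_war, "-> head for the shelter" if shelter else "-> enjoy the surface")
# At 0.1% the happy surface life wins; at 90% the shelter does. Either way,
# the decision needs an accurate probability stored somewhere.
```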
Having two separate sets of beliefs like this is just asking to walk into lots of other well-known problematic biases; most notably, you are much more likely in practice to simply pick between your true-belief set and your instrumental-belief set depending on which seems most emotionally and socially appropriate moment-to-moment, rather than (as would be required for this hack to be generally useful) always using your instrumental-beliefs for decision-making and emotional welfare but never for processing new evidence.
All that said, I agree with your overall premise: there is nothing requiring that true belief always be better than false belief for human welfare. However, it is better much more often than not. And as I described above, maintaining two different sets of beliefs for different purposes is more apt to trigger standard human failure modes than just having a single set and disposing of cognitive dissonance as much as possible. Given all that, I argue that we are best off pursuing a general strategy of truth-seeking in our beliefs except when there is overwhelming external evidence for particular beliefs being bad; and even then, it’s probably a better strategy overall to simply avoid finding out about such things somehow than to learn them and try to deceive yourself afterwards.
I’m not sure I understand. The reason I like that particular belief is because it lets me reject false beliefs with greater ease. If holding a belief reduces my ability to do that, then is it, of necessity, false? Wouldn’t that mean that my belief must be true?
The reason I like that particular belief is because it lets me reject false beliefs with greater ease.
How do you know those propositions being rejected are false?
If it’s because the first belief leads to that conclusion, then that’s circular logic.
If it’s because you have additional evidence that the rejected propositions are false, and that their falseness implies the first belief’s trueness, then you have a simple straightforward dependency, and all this business about instrumental benefits is just a distraction. However, you still have to be careful not to let your evidence flow backwards, because that would be circular logic.
I don’t know that the propositions being rejected are false any more than I know that the original proposition is true.
But I do know that in every case where I went through the long and laborious process of analyzing the proposition, it’s worked out the same as if I had just used the shortcut of assuming my original proposition is true. It’s not just some random belief; it’s field-tested. In point of fact, it’s been field-tested so much that I now know I would continue to act as if it were true even if evidence were presented that it was false. I would assume that it’s more likely that the new evidence was flawed, until the preponderance of the evidence was just overwhelming, or somebody supplied a new test that was nearly as good and provably correct.
That sounds pretty good then. It’s not quite at a Bayesian ideal; when you run across evidence that weakly contradicts your existing hypothesis, that should result in a weak reduction in confidence, rather than zero reduction. But overall, requiring a whole lot of contradictory evidence in order to overturn a belief that was originally formed based on a lot of confirming evidence is right on the money.
Actually, though, I wanted to ask you another question: what specific analyses did you do to arrive at these conclusions?
I am also very interested in the question of personal identity. However, I tend to phrase it as a question of “self” rather than identity. Within sociology and social psychology, “identity” usually refers to a specific role a person dons in a particular setting, while “Identity” is the totality of roles they contain in their cognitive wardrobe, the process by which they create/delete identities, and the apparatus for choosing to take on an identity.
I also have much to say about hats, but I would like to hear what you think of the above-stated ideas before I continue.
So much interesting stuff.