Can you actually prove [my epistemology is] worse [at figuring out what’s true], or were you just asking a hypothetical?
No, I can prove that, provided that I’m understanding correctly what approach you’re using. You said earlier:
I’ve thought about it, and I have reasons for [believing that a non-intelligence cannot simulate intelligence], but I can’t prove it’s true, and I don’t care.
By “don’t care” I take it that you mean that you will not update your confidence level in that belief if new evidence comes in. The closer you get to a Bayesian ideal, the better you’ll be at extracting the largest increase in map accuracy from a given amount of input. By that criterion, updating on evidence (no matter how roughly) is always closer to the ideal than ignoring it, provided that you can at least avoid misinterpreting the evidence so badly that you update in the wrong direction.
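As a rough illustration of that claim, here is a minimal sketch with made-up observation probabilities (none of these numbers are from the exchange itself): one agent updates exactly, one updates with deliberately sloppy likelihoods that at least point in the right direction, and a third never updates at all.

```python
import random

random.seed(0)

H_IS_TRUE = True          # the actual state of the world (assumed for the demo)
P_SUPPORT_IF_TRUE = 0.7   # chance an observation supports H when H is true
P_SUPPORT_IF_FALSE = 0.3  # chance an observation supports H when H is false

def bayes_update(prior, supports, p_true=P_SUPPORT_IF_TRUE, p_false=P_SUPPORT_IF_FALSE):
    """One Bayes update of P(H) on a single binary observation."""
    like_true = p_true if supports else 1 - p_true
    like_false = p_false if supports else 1 - p_false
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

exact = rough = ignorer = 0.5  # everyone starts from the same 50/50 prior

for _ in range(50):
    supports = random.random() < (P_SUPPORT_IF_TRUE if H_IS_TRUE else P_SUPPORT_IF_FALSE)
    exact = bayes_update(exact, supports)                           # ideal Bayesian
    rough = bayes_update(rough, supports, p_true=0.6, p_false=0.4)  # sloppy, right-direction update
    # `ignorer` never changes: it "doesn't care" about new evidence

print(f"ideal Bayesian: {exact:.3f}   rough updater: {rough:.3f}   never updates: {ignorer:.3f}")
```

The rough updater uses the wrong likelihoods but still moves toward the truth on average, which is the sense in which any reasonable update beats ignoring the evidence.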
That’s the epistemological angle. But you also run into trouble looking at it instrumentally:
In order to most effectively adjust your beliefs so that you end up holding the ones that give you the highest expected utility, you must have accurate confidence levels for those beliefs somewhere! It might be okay to disbelieve that nuclear war is possible if the thought depresses you and an actual nuclear war is only 0.1% likely; however, if it’s 90% likely and you assign any reasonable amount of value to being alive even if depressed, then you’re better off believing the truth, because you’ll go find a deep underground shelter to be depressed in instead of being happily vaporized on the surface!
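The arithmetic behind that example can be made explicit. This is only a sketch with hypothetical utility numbers (none of them come from the discussion above):

```python
# Hypothetical utilities, chosen only for illustration
U_ALIVE_HAPPY = 100        # alive and not worrying about the bomb
U_ALIVE_DEPRESSED = 60     # alive, but gloomy about the risk
U_DEAD = 0                 # vaporized on the surface
P_SHELTER_WORKS = 0.9      # assumed chance the shelter actually saves you

def expected_utility(p_war, believes_the_risk):
    """Expected utility of believing vs. disbelieving that war is coming."""
    if believes_the_risk:
        # Depressed either way, but you go find a deep underground shelter.
        u_if_war = P_SHELTER_WORKS * U_ALIVE_DEPRESSED + (1 - P_SHELTER_WORKS) * U_DEAD
        return p_war * u_if_war + (1 - p_war) * U_ALIVE_DEPRESSED
    # Happy, but you take no precautions at all.
    return p_war * U_DEAD + (1 - p_war) * U_ALIVE_HAPPY

for p_war in (0.001, 0.9):
    print(f"P(war) = {p_war:>5}:  believe -> {expected_utility(p_war, True):5.1f}"
          f"   disbelieve -> {expected_utility(p_war, False):5.1f}")
```

With these numbers the comfortable false belief wins at 0.1% (about 99.9 vs. 60.0) and loses badly at 90% (10.0 vs. 54.6), which is exactly the shape of the argument above.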
Having two separate sets of beliefs like this is just asking to walk into lots of other well-known problematic biases; most notably, in practice you are much more likely to simply pick between your true-belief set and your instrumental-belief set depending on which seems most emotionally and socially appropriate moment-to-moment, rather than (as would be required for this hack to be generally useful) always using your instrumental beliefs for decision-making and emotional welfare but never for processing new evidence.
All that said, I agree with your overall premise: there is nothing requiring that true belief always be better than false belief for human welfare. However, it is better much more often than not. And as I described above, maintaining two different sets of beliefs for different purposes is more apt to trigger standard human failure modes than just having a single set and disposing of cognitive dissonance as much as possible. Given all that, I argue that we are best off pursuing a general strategy of truth-seeking in our beliefs except when there is overwhelming external evidence that holding particular beliefs is bad for us; and even then, it’s probably a better strategy overall to simply avoid finding out about such things somehow than to learn them and try to deceive yourself afterwards.
I’m not sure I understand. The reason I like that particular belief is because it lets me reject false beliefs with greater ease. If holding a belief reduces my ability to do that, then is it of necessity false? Wouldn’t that mean that my belief must be true?
The reason I like that particular belief is because it lets me reject false beliefs with greater ease.
How do you know those propositions being rejected are false?
If it’s because the first belief leads to that conclusion, then that’s circular logic.
If it’s because you have additional evidence that the rejected propositions are false, and that their falseness implies the first belief’s trueness, then you have a simple straightforward dependency, and all this business about instrumental benefits is just a distraction. However, you still have to be careful not to let your evidence flow backwards, because that would be circular logic.
I don’t know that the propositions being rejected are false any more than I know that the original proposition is true.
But I do know that in every case where I went through the long and laborious process of analyzing the proposition, it worked out the same as if I had just used the shortcut of assuming my original proposition is true. It’s not just some random belief; it’s field-tested. In point of fact, it’s been field-tested so much that I now know I would continue to act as if it were true even if evidence were presented that it was false. I would assume that it’s more likely that the new evidence was flawed, until the preponderance of the evidence was just overwhelming, or somebody supplied a new test that was nearly as good, and provably correct.
That sounds pretty good, then. It’s not quite at the Bayesian ideal; when you run across evidence that weakly contradicts your existing hypothesis, that should result in a weak reduction in confidence rather than zero reduction. But overall, requiring a whole lot of contradictory evidence in order to overturn a belief that was originally formed on the basis of a lot of confirming evidence is right on the money.
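Both halves of that can be put in numbers. The following is a sketch in log-odds terms with assumed likelihood ratios (nothing here is from the conversation itself): a single weak piece of counter-evidence should nudge confidence down slightly, while a belief built from many confirmations takes a long run of such evidence to overturn.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

LR_CONFIRM = 2.0      # assumed likelihood ratio of each confirming observation
LR_CONTRADICT = 1.5   # assumed likelihood ratio of each weakly contradicting one

# A belief built up from 10 confirmations, starting from a 50/50 prior.
log_odds = logit(0.5) + 10 * math.log(LR_CONFIRM)
print(f"after 10 confirmations:       {sigmoid(log_odds):.5f}")

# One weak contradiction should lower confidence a little, not leave it unchanged.
print(f"after one weak contradiction: {sigmoid(log_odds - math.log(LR_CONTRADICT)):.5f}")

# But overturning the belief takes a long run of such contradictions.
n = 0
while sigmoid(log_odds) > 0.5:
    log_odds -= math.log(LR_CONTRADICT)
    n += 1
print(f"weak contradictions needed to fall below 50%: {n}")
```

With these assumed ratios, confidence drops from about 0.99902 to 0.99854 after one weak contradiction, and it takes 18 of them to pull the belief back below 50%.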
Actually, though, I wanted to ask you another question: what specific analyses did you do to arrive at these conclusions?