But you don’t know which algorithm to follow to make yourself believe some arbitrary thing.
Actually, I do. I already said that believing something is basically the same as treating it as a fact, and I know how to treat something as a fact. Again, I might not want to treat it as a fact, but that is no different from not wanting to go to the store: the algorithm is equally clear.
I see absolutely no practical problems in labeling my beliefs as “pretty sure it’s true”, “likely true”, “more likely than not”, etc. I do NOT prefer to ‘say and think, “this is how it is.”’
Your comment history contains many flat-out factual claims without any such qualification. Thus your revealed preferences show that you agree with me.
I can easily set up a gradient with something like Amicus Plato, sed magis amica veritas ("Plato is my friend, but truth is a greater friend") at one end and somebody completely unprincipled at the other.
I agree that there is such a gradient, but that is quite different from a black-and-white division into people who care about truth and people who don't, as you suggested before. This is practically parallel to the discussion of the binary belief idea: if you don't like binary beliefs, you should also admit that there is no binary division of people who care about truth and people who don't.
You explicitly said:
Consequently the total expected utility of believing the theory is -0.192. Therefore I am not going to believe it.
which actually gives zero utility to believing what is true. That puts you in a rather extreme position on that gradient.
First of all, that was a toy model and not a representation of my personal opinions, which is why it started out, “But suppose you also think there is an 80% chance...” If you are asking about my real position on that gradient, I am pretty far toward the extreme end of caring about truth. Far enough that I refuse to pronounce the falsehood that I don’t care about anything else.
Second, it is unfair even to the toy model to say that it gives zero utility to believing what is true. It assigns a utility of 1 to believing a truth, and therefore an expected utility of 0.8 to an 80% probability of believing a truth. But the total utility of believing something with a probability of 80% is less than that, because the same probability implies a 20% chance of believing something false, which has negative utility. Finally, in the model, the person adds in utility or disutility from other factors, and ends up with an overall negative utility for believing something that has an 80% chance of being true. In other words, the model deals with probability rather than bare “truth,” and it does not assign truth zero utility: to the degree that the thing is true or probably true, that adds utility. Believing a falsehood with the same consequences would, in this model, have even lower utility.
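To make the shape of that calculation explicit, here is a minimal worked version. The utility of 1 for believing a truth and the 80% probability are the ones just described; the -1 for believing a falsehood and the -0.792 from other factors are illustrative assumptions, chosen only so that the total matches the quoted -0.192, not values taken from the original toy model.

\[
EU(\text{believe}) = p \cdot U_{\text{truth}} + (1-p) \cdot U_{\text{falsehood}} + U_{\text{other}} = 0.8(1) + 0.2(-1) + (-0.792) = -0.192
\]

Written out this way, the point is visible in the first term: the higher the probability that the claim is true, the higher the total, so truth is weighted positively rather than at zero.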
believing something is basically the same as treating it as a fact, and I know how to treat something as a fact
Not quite. The whole point here is the rider-elephant distinction, and no, your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe this.
Your comment history contains many flat-out factual claims without any such qualification
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
you should also admit that there is no binary division of people who care about truth and people who don’t.
Sure, I’ll admit this :-)
It assigns a utility of 1 to believing a truth
Fair point, I forgot about this.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all; hence Big Brother.
your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe this.
Belief is a vague generalization, not a binary bit in reality that you could determinately check for. The question is what is the best way to describe that vague generalization. I say it is “the person treats this claim as a fact.” It is true that you could try to make yourself treat something as a fact, and do it once or twice, but then on a bunch of other occasions not treat it as a fact, in which case you failed to make yourself believe it—but not because the algorithm is unknown. Or you might treat it as a fact publicly, and treat it as not a fact privately, in which case you do not believe it, but are lying. And so on. But if you consistently treat it as a fact in every way that you can (e.g. you bet that it will turn out true if it is tested, you act in ways that will have good results if it is true, you say it is true and defend that by arguments, you think up reasons in its favor, and so on) then it is unreasonable not to describe that as you believing the thing.
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
I already agreed that the fact that you treat some things as facts would not necessarily prevent you from assigning them probabilities and admitting that you might be wrong about them.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all; hence Big Brother.
That depends on the details of the utility function, and does not necessarily follow. In real life, people tend to act like this: rather than deciding not to believe something that has a probability of 80%, the person first decides to believe that it has a probability of only 20%, or whatever, and then decides not to believe it, saying that he simply decided not to believe something that was probably false. My utility function would assign an extreme negative value to allowing my assessment of the probability of something to be manipulated in that way.