Your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe it.
Belief is a vague generalization, not a binary bit in reality that you could determinately check for. The question is what is the best way to describe that vague generalization. I say it is “the person treats this claim as a fact.” It is true that you could try to make yourself treat something as a fact, and do it once or twice, but then on a bunch of other occasions not treat it as a fact, in which case you failed to make yourself believe it—but not because the algorithm is unknown. Or you might treat it as a fact publicly, and treat it as not a fact privately, in which case you do not believe it, but are lying. And so on. But if you consistently treat it as a fact in every way that you can (e.g. you bet that it will turn out true if it is tested, you act in ways that will have good results if it is true, you say it is true and defend that by arguments, you think up reasons in its favor, and so on) then it is unreasonable not to describe that as you believing the thing.
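One way to make the betting criterion above concrete is the standard decision-theoretic reading: a credence of p makes any bet priced below p look favorable, so someone who consistently treats a claim as a fact behaves like a near-certain bettor. A minimal sketch in Python (the function names and numbers are illustrative assumptions, not anything from the discussion):

```python
# Minimal sketch: a credence of p makes bets priced below p favorable.
# Function names and numbers are illustrative assumptions.

def expected_value(credence: float, price: float, payout: float = 1.0) -> float:
    """Expected value of buying a contract that pays `payout` if the
    claim turns out true, at cost `price`, given subjective probability
    `credence`."""
    return credence * payout - price

def accepts_bet(credence: float, price: float) -> bool:
    """The agent accepts the bet exactly when it has positive expected value."""
    return expected_value(credence, price) > 0

# Treating the claim as a fact looks like high-credence betting behavior:
print(accepts_bet(credence=0.95, price=0.80))  # True: favorable at this credence
print(accepts_bet(credence=0.20, price=0.80))  # False: unfavorable
```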
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
I already agreed that the fact that you treat some things as facts would not necessarily prevent you from assigning them probabilities and admitting that you might be wrong about them.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, then appropriate manipulation of incentives can make you (sincerely) believe anything at all; hence the Big Brother scenario.
That depends on the details of the utility function, and does not necessarily follow. In real life people tend to act like this: rather than deciding not to believe something that has a probability of 80%, the person first decides to believe that it has a probability of 20%, or whatever, and then decides not to believe it, saying that he simply decided not to believe something that was probably false. My utility function would assign an extreme negative value to allowing my assessment of the probability of something to be manipulated in that way.
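To make that manipulation concrete: under a proper scoring rule such as the Brier score, the report that maximizes expected utility is the agent's actual credence, so honest belief is incentive-compatible; add an external reward for professing the "approved" probability and the optimum shifts toward it, which is the Big Brother mechanism in miniature. A toy Python sketch (the bonus term, the `approved` value, and all numbers are my illustrative assumptions):

```python
# Toy sketch: a proper scoring rule rewards honest reports; an external
# incentive for a particular report drags the optimum away from honesty.
# The bonus term and all numbers are illustrative assumptions.

def brier_utility(report: float, true_credence: float) -> float:
    """Expected (negated) Brier score of reporting `report` when the agent's
    actual credence is `true_credence`; higher is better, and it is
    maximized at report == true_credence."""
    return -(true_credence * (1 - report) ** 2
             + (1 - true_credence) * report ** 2)

def incentivized_utility(report: float, true_credence: float,
                         approved: float = 0.2, bonus: float = 0.5) -> float:
    """Same, plus a Big-Brother-style penalty for straying from `approved`."""
    return brier_utility(report, true_credence) - bonus * (report - approved) ** 2

def best_report(utility, true_credence: float) -> float:
    """Grid search for the utility-maximizing report."""
    grid = [i / 1000 for i in range(1001)]
    return max(grid, key=lambda r: utility(r, true_credence))

print(best_report(brier_utility, 0.8))         # 0.8: honesty is optimal
print(best_report(incentivized_utility, 0.8))  # 0.6: drifts toward the approved 0.2
```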