Should you believe (or not) certain things depending on what other people might call you?
Mostly no, but in principle it is possible. If you do not want to be called something, then a greater probability of being called that thing if you believe a particular claim means a lower utility from believing it. So if you care about what you are called as well as caring about truth, you might have to trade away some truth for the sake of what you are called.
More importantly, my comment included the word “deservedly.” If you say something true, and you have good reason to say it, and people call you a racist, that will be undeserved. If you are called that deservedly, it either means the thing was false, or at least that you did not have a good reason to say it. In the case of saying those people were not human, it would be both false, and something there is no good reason to say.
Lots of verbiage, but I still don’t understand your point.
People call other people many things. If you believe that AI is dangerous, some people will call you an idiot. If you are gay, some people will call you a moral degenerate. If you’re Snowden, some people will call you a hero and other people will call you a traitor. So what?
As to that “good reason to say it”, who judges what’s a good reason and what is not?
The point is that deciding to say something, or even deciding to believe it, is like any other decision, like deciding to go to the store. Human beings care about many things, and therefore many things can affect their decisions, including about what to believe. Let me give an example:
Suppose you think there is an 80% chance that global warming theory is correct. You say, “If I believe that the theory is correct, there will be an 80% chance that I am believing the truth, and a 20% chance that I am believing a falsehood. I get a unit of utility from believing the truth, and a negative unit from believing a falsehood. So that will give me 0.6 expected utility from believing the theory. Consequently I will believe it.”
But suppose you also think there is an 80% chance that black people have a lower average IQ than white people. You say, “As in the other case, there is a positive expected utility from the probability of believing the truth, if I believe this. But there is a 99% chance that people will call me a racist, and being called a racist has a utility of −0.8. Consequently the total expected utility of believing the theory is −0.192. Therefore I am not going to believe it.” Note that if there were a 99% chance that the theory was true in this case, your expected utility would be 0.98 − 0.792 = 0.188, which would be positive, so you would probably choose to believe it. So being called a racist can affect whether you believe it, but it will matter less for theories you consider more probable.
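For concreteness, here is that toy arithmetic as a quick Python sketch. The function name, the parameters, and the numbers are just the stipulations of this example, nothing more:

```python
def expected_utility_of_believing(p_true, u_true=1.0, u_false=-1.0, other_terms=0.0):
    """Toy model: expected utility of treating a claim as a fact.

    p_true      -- your probability that the claim is true
    u_true      -- utility of believing something true
    u_false     -- utility of believing something false
    other_terms -- expected utility from everything else you care about
                   (here, the social cost of being called names)
    """
    return p_true * u_true + (1 - p_true) * u_false + other_terms

# Global warming case: no social penalty stipulated in the example.
print(expected_utility_of_believing(0.8))                            # ~0.6

# Second case: a 99% chance of being called a racist, at -0.8 utility.
social_cost = 0.99 * -0.8                                            # -0.792
print(expected_utility_of_believing(0.8, other_terms=social_cost))   # ~-0.192
print(expected_utility_of_believing(0.99, other_terms=social_cost))  # ~0.188
```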
As to that “good reason to say it”, who judges what’s a good reason and what is not?
If you mean whose judgement determines it, no one’s does, just as no one’s judgement determines whether the earth goes around the sun.
The point is that deciding to say something, or even deciding to believe it, is like any other decision, like deciding to go to the store.
Deciding to say something, sure, but deciding to believe is a bit different. Your degree of conscious control is much more limited there. You can try to persuade yourself, but yourself might not be willing to be persuaded :-/
Suppose you think there is an 80% chance that global warming theory is correct… Consequently I will believe it.
Huh? One of the most basic lessons of LW is that belief in propositions is not binary but a fraction between 0 and 1 which we usually call probability. If you think there is an 80% chance that the global warming theory is correct, this is your belief. I don’t see any need to make it an “I believe it fully and with all my heart” thing.
Consequently the total expected utility of believing the theory is −0.192. Therefore I am not going to believe it
Correct. This is precisely the difference between people who care about what reality actually is and people who are mostly concerned with society’s approval.
Your degree of conscious control is much more limited there. You can try to persuade yourself, but yourself might not be willing to be persuaded :-/
I agree that there is often more difficulty, but there is no difference in principle from the fact that you might decide to go to the store, but suddenly be overcome by a wave of laziness so that you end up staying home playing video games.
Huh? One of the most basic lessons of LW is that belief in propositions is not binary but a fraction between 0 and 1 which we usually call probability. If you think there is an 80% chance that the global warming theory is correct, this is your belief.
It is a question of being practical. I agree with thinking of probabilities as formalizing degrees of belief, but it is not practical to be constantly saying “there is an 80% chance of such and such,” or even thinking about it in this way. Instead, you prefer to say and think, “this is how it is.” Roughly you can analyze “decide to believe this” as “decide to start treating this as a fact.” So if you decide to believe the global warming theory, you will say things like “global warming is happening.” That will not necessarily prevent you from admitting that the probability is 80%, if someone asks you specifically about the probability.
This is precisely the difference between people who care about what reality actually is and people who are mostly concerned with society’s approval.
Choose your side.
All humans care at least a little about truth, but also about other things. So you cannot divide people up into people who care about what reality actually is and people who care about other things like society’s approval—everyone cares a bit about both. Consequently, if some people say, “we care only about truth, and nothing else,” those people are saying something false. So why are they saying it? Most likely, it is precisely because of one of the things they care about other than truth: namely looking impressive. Since I care more about truth than most people, including the people who want to look impressive, I will tell the truth about this: I care about truth, but I also care about other things, and the other things I care about can affect not only my actions, but also my thoughts and beliefs.
If we believe that global warming of exactly +2°C is going to happen within 100 years with 99.9% probability, the most reasonable response is to do geoengineering to counteract those +2°C.
One of the primary reasons for choosing a different strategy is that there’s a lot of uncertainty involved.
If you grant a 20% chance that global warming isn’t happening, that geoengineering project has the potential to mess things up badly.
If the people in charge follow the epistemology that you propose, I think there’s a good chance that humanity won’t survive this century, because someone will think that taking a 1% chance of destroying humanity isn’t a problem.
Any single job application I send out has a more than 80% chance of being rejected. If I followed your practical advice, I wouldn’t send out any applications. That’s pretty stupid “practical advice”.
Elon Musk says that when he started SpaceX he thought there was a 10% chance that it would become a successful company. If he had followed your advice he wouldn’t have started that valuable company, and the same is likely true for many other founders who have a realistic view of their chances of success.
One of the core reasons why Eliezer wrote the Sequences is to promote the idea that low-probability, high-impact events matter a great deal and that, as a result, we should invest money in X-risk prevention.
So you cannot divide people up into people who care about what reality actually is and people who care about other things like society’s approval—everyone cares a bit about both
While that’s true, there’s a lot of value in having norms of discussion on LW that uphold the ideal of truth. I don’t think it’s good to try to act against truthseeking norms like you do here.
If the people in charge follow the epistemology that you propose, I think there’s a good chance that humanity won’t survive this century, because someone will think that taking a 1% chance of destroying humanity isn’t a problem.
You did not understand the proposal. Let’s analyze a situation like that. Suppose there is a physics experiment which has a 99% chance to be safe, and a 1% chance to destroy humanity. The people in charge ask, “Should we accept it as a fact that the experiment will be safe?”
According to our previous stipulations, the expected utility of believing that it is safe will be 0.99 (the chance of believing a truth) minus 0.01 (the chance of believing a falsehood), so a total of 0.98, considering only the elements of truth and falsehood.
But presumably people care about not destroying humanity as well. Let’s say the utility of destroying humanity is negative 1,000,000. Then the total expected utility of treating it as a fact that the experiment will be safe will be 0.98 − (0.01 × 1,000,000), or in other words, −9,999.02. Very bad. So they will choose not to believe it. Nor does that mean that they will believe the opposite: that would have a utility of −0.98, which is still negative. So they will choose to believe neither, and simply say, “It would probably be safe, but there would be too much risk of destroying humanity, so we will not do it.” This is presumably the result that you want, and it is also the result of following my proposal.
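The same toy arithmetic, with the catastrophic term added (again, the names here are mine and the figures are just the ones stipulated above):

```python
def expected_utility_of_believing(p_true, u_true=1.0, u_false=-1.0, other_terms=0.0):
    # Same toy model as before: truth term + falsehood term + everything else.
    return p_true * u_true + (1 - p_true) * u_false + other_terms

P_SAFE = 0.99
U_DESTROY_HUMANITY = -1_000_000

# Treating "the experiment is safe" as a fact also carries the expected
# disutility of acting on that belief when it is false.
catastrophe_term = (1 - P_SAFE) * U_DESTROY_HUMANITY                        # ~-10,000
print(expected_utility_of_believing(P_SAFE, other_terms=catastrophe_term))  # ~-9,999.02

# Believing the opposite ("the experiment is unsafe") just swaps the probabilities
# on the truth and falsehood terms, and is also negative.
print(expected_utility_of_believing(1 - P_SAFE))                            # ~-0.98
```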
Any single job application I sent out has a less a more than 80% chance of being rejected. If I would follow your practical advice I wouldn’t sent out any applications. That’s pretty stupid “practical advice”.
This should be analyzed in the same way. You do not choose to say, “This will definitely not be accepted,” because you will not maximize your utility that way. Instead, you say, “This will probably not be accepted, but it might be.”
In other words, you seem to think that I was proposing a threshold where if something has a certain probability, you suddenly decide to accept it as a fact. There is no such threshold, and I did not propose one. Depending on the case, you will choose to treat something as a fact when doing so maximizes your utility. Thus, for example, when you look out the window and see rain, you say, “It is raining,” rather than “It is probably raining,” because the cost of adding the qualification in every instance is greater than its benefit, given the small risk of being wrong about the rain and the harmlessness of being wrong in most such cases.
While that’s true, there’s a lot of value in having norms of discussion on LW that uphold the ideal of truth. I don’t think it’s good to try to act against truthseeking norms like you do here.
I am in favor of truth and truthseeking norms, and I most definitely did not “try to act against truthseeking norms” as you suggest that I did. I am against the falsehood of asserting that you care only about truth. No truthseeker would claim such a thing; but a status seeker might claim it.
I am in favor of truth and truthseeking norms, and I most definitely did not “try to act against truthseeking norms” as you suggest that I did.
If you value those norms, there is no reason to say “But even if there hadn’t been, anyone saying that those people were not human would deservedly get called a racist” and defend that notion.
I am against the falsehood of asserting that you care only about truth.
That feels to me like a strawman. Who made such an assertion?
If you value those norms, there is no reason to say “But even if there hadn’t been, anyone saying that those people were not human would deservedly get called a racist” and defend that notion.
If someone says that Jews are not human, he would deservedly be called a racist. That has nothing to do with attacking truthseeking norms, because the claim about Jews is utterly false. The same thing applies to the situation discussed.
That feels to me like a strawman. Who made such an assertion?
I did not know you were talking about the discussion of racism. I thought you were talking about the fact that I said that other terms in your utility function besides truth should affect what you do (including what you treat as a fact, since that is something that you do.) That seems to me a reasonable interpretation of what you said, given that your main criticism seemed to be about this.
If someone says that Jews are not human, he would deservedly be called a racist.
Communication always focuses on a subset of the available facts. You can make a choice to focus on influencing other people to believe certain things by appealing to rational argument.
Here you made the choice to influence other people by appealing to the social desirability of holding certain beliefs.
Making that choice damages truth-seeking norms.
Whether someone “deserves” something is also a moral judgement and not just a statement of objective fact.
I think it might be more obvious to someone that saying that Jews are not human deserves moral opprobrium than that Jews are human. If you are not a moral realist, you might think this is impossible, but I am a moral realist, and I don’t see any reason why the moral statement might not be more obvious. In particular, I think it would likely be true for many people in the case discussed. In that case, there is no reason not to bring it up in a discussion of this kind, since it is normal to lead people from what is more obvious to what is less obvious. And there is nothing against truthseeking norms in doing that.
I suspect that you will disagree, but your disagreement would be like a conservative economist saying “minimum wages are harmful, so if you propose minimum wages you are hurting people.” The person proposing minimum wages might in fact be hurting people, but this is definitely not what they are trying to do. And as I said originally, I was not attacking truthseeking or truthseeking norms in any way. (And I am not saying that I am in fact wrong in this way either—I am just saying that you should not be attacking my motives in that way.)
I think there is. One big difference is that the algorithm you need to follow to get to the store is clear, simple, and known. But you don’t know which algorithm to follow to make yourself believe some arbitrary thing.
It is a question of being practical.
I see absolutely no practical problems in labeling my beliefs as “pretty sure it’s true”, “likely true”, “more likely than not”, etc. I do NOT prefer to ‘say and think, “this is how it is.”’
So you cannot divide people up into people who care about what reality actually is and people who care about other things like society’s approval—everyone cares a bit about both.
I can easily set up a gradient with something like Amicus Plato, sed magis amica veritas at one end and somebody completely unprincipled on the other.
You explicitly said:
Consequently the total expected utility of believing the theory is −0.192. Therefore I am not going to believe it.
which actually gives zero utility to believing what is true. That puts you in a rather extreme position on that gradient.
How much manipulation of your utility function will be necessary to make you truly love the Big Brother?
But you don’t know which algorithm to follow to make yourself believe some arbitrary thing.
Actually, I do. I already said that believing something is basically the same as treating it as a fact, and I know how to treat something as a fact. Again, I might not want to treat it as a fact, but that is no different from not wanting to go to the store: the algorithm is equally clear.
I see absolutely no practical problems in labeling my beliefs as “pretty sure it’s true”, “likely true”, “more likely than not”, etc. I do NOT prefer to ‘say and think, “this is how it is.”’
Your comment history contains many flat out factual claims without any such qualification. Thus your revealed preferences show that you agree with me.
I can easily set up a gradient with something like Amicus Plato, sed magis amica veritas at one end and somebody completely unprincipled on the other.
I agree that there is such a gradient, but that is quite different from a black and white division into people who care about truth and people who don’t, as you suggested before. This is practically parallel to the discussion of the binary belief idea: if you don’t like the binary beliefs, you should also admit that there is no binary division of people who care about truth and people who don’t.
You explicitly said:
Consequently the total expected utility of believing the theory is -0.192. Therefore I am not going to believe it.
which actually gives zero utility to believing what is true. That puts you in a rather extreme position on that gradient.
First of all, that was a toy model and not a representation of my personal opinions, which is why it started out, “But suppose you also think there is an 80% chance...” If you are asking about my real position on that gradient, I am pretty far into the extreme end of caring about truth. Far enough that I refuse to pronounce the falsehood that I don’t care about anything else.
Second, it is unfair even to the toy model to say that it gives zero utility to believing what is true. It assigns a utility of 1 to believing a truth, and therefore 0.8 to an 80% probability of believing a truth. But the total utility of believing something with a probability of 80% is less, because that probability implies a 20% chance of believing something false, which has negative utility. Finally, in the model, the person adds in utility or disutility from other factors, and ends up with an overall negative utility for believing something that has an 80% chance of being true. That is, the claim was not simply “true,” and truth was not given “zero utility.” In particular, to the degree that the claim is true or probably true, that adds utility. Believing a falsehood with the same consequences, in this model, would have even lower utility.
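Written out term by term, with the same stipulated numbers as in the toy model above, the calculation being defended is:

```latex
\[
\underbrace{0.8 \times 1}_{\text{chance of believing a truth}}
\;+\;
\underbrace{0.2 \times (-1)}_{\text{chance of believing a falsehood}}
\;+\;
\underbrace{0.99 \times (-0.8)}_{\text{expected cost of being called a racist}}
\;=\; 0.8 - 0.2 - 0.792 \;=\; -0.192
\]
```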
believing something is basically the same as treating it as a fact, and I know how to treat something as a fact
Not quite. The whole point here is the rider-elephant distinction, and no, your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe this.
Your comment history contains many flat out factual claims without any such qualification
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
you should also admit that there is no binary division of people who care about truth and people who don’t.
Sure, I’ll admit this :-)
It assigns a utility of 1 to believing a truth
Fair point, I forgot about this.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all—thus the Big Brother.
your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe this.
Belief is a vague generalization, not a binary bit in reality that you could determinately check for. The question is what is the best way to describe that vague generalization. I say it is “the person treats this claim as a fact.” It is true that you could try to make yourself treat something as a fact, and do it once or twice, but then on a bunch of other occasions not treat it as a fact, in which case you failed to make yourself believe it—but not because the algorithm is unknown. Or you might treat it as a fact publicly, and treat it as not a fact privately, in which case you do not believe it, but are lying. And so on. But if you consistently treat it as a fact in every way that you can (e.g. you bet that it will turn out true if it is tested, you act in ways that will have good results if it is true, you say it is true and defend that by arguments, you think up reasons in its favor, and so on) then it is unreasonable not to describe that as you believing the thing.
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
I already agreed that the fact that you treat some things as facts would not necessarily prevent you from assigning them probabilities and admitting that you might be wrong about them.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all—thus the Big Brother.
That depends on the details of the utility function, and does not necessarily follow. In real life, people tend to act like this: rather than deciding not to believe something that has a probability of 80%, the person first decides to believe that it has a probability of 20%, or whatever. Then he decides not to believe it, and says that he simply decided not to believe something that was probably false. My utility function would assign an extreme negative value to allowing my assessment of the probability of something to be manipulated in that way.
No.
My charge isn’t about motives but about effects.
Fine. I disagree with your assessment.