For example, the conjunction of all of your beliefs with “some of my beliefs are false” is logically inconsistent.
I don’t see how that conjunction is logically inconsistent? (Believing “all my beliefs are false” would be logically inconsistent, but I doubt any sensible person believes that.)
I think consistency is good. A map that is not consistent with itself cannot be used for the purposes of predicting the territory. An inconsistent map (especially one where the form and extent of the inconsistency are unknown, save that the map is inconsistent) cannot be used for inference. An inconsistent map is useless. I don’t want consistency because consistency is desirable in and of itself; I want consistency because it is useful.
The same sort of thing happens in the process of making your preferences consistent.
An example, please? I cannot fathom a reason to possess inconsistent preferences. An agent with inconsistent preferences cannot make rational choices in decision problems involving those preferences. Decision theory requires that your preferences be consistent before any normative rules can be applied. Inconsistent preferences make you exploitable as a money pump. Consistent preferences are strictly more useful than inconsistent preferences.
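The money-pump claim can be made concrete with a toy simulation. Everything specific here (the item names, the cycle X > Y > Z > X, the one-unit fee) is my own illustrative choice, not something from the discussion:

```python
# An agent with cyclic preferences X > Y > Z > X treats every trade around
# the cycle as an improvement worth paying for, so a trader can extract
# money from it indefinitely.
preference_cycle = {"Z": "Y", "Y": "X", "X": "Z"}  # held item -> item preferred to it
fee = 1          # the agent pays 1 unit per "upgrade" it accepts
wealth = 100
holding = "Z"

for _ in range(30):                   # ten full trips around the cycle
    holding = preference_cycle[holding]
    wealth -= fee

print(holding, wealth)  # Z 70
```

After thirty trades the agent holds exactly what it held at the start, yet has paid thirty fees: that is the sense in which inconsistent preferences are exploitable.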
That does not mean that no possible set of preferences would be both sane and consistent.
Assuming that “sane” preferences are useful (if usefulness is not a characteristic of sane preferences, then I don’t want sane preferences), I make the following claim:
“I don’t see how that conjunction is logically inconsistent?” Suppose you have beliefs A, B, C, and belief D: “At least one of beliefs A, B, C is false.” The conjunction of A, B, C, and D is logically inconsistent. They cannot all be true, because if A, B, and C are all true, then D is false, while if D is true, at least one of the others is false. So if you think that you have some false beliefs (and everyone does), then the conjunction of that with the rest of your beliefs is logically inconsistent.
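The unsatisfiability of that conjunction can be checked mechanically by brute force over all truth assignments; a minimal sketch:

```python
from itertools import product

# D is defined as "at least one of A, B, C is false".
# Check whether any assignment makes A, B, C, and D simultaneously true.
satisfiable = any(
    a and b and c and (not (a and b and c))  # the last factor is D
    for a, b, c in product([True, False], repeat=3)
)
print(satisfiable)  # False: no assignment satisfies all four beliefs at once
```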
“I think consistency is good.” I agree.
“A map that is not consistent with itself cannot be used for the purposes of predicting the territory.” This is incorrect. It can predict two different things depending on which part is used, and one of those two will be correct.
“An inconsistent map (especially one where the form and extent of the inconsistency are unknown, save that the map is inconsistent) cannot be used for inference.” This is completely wrong, as we can see from the example of recognizing that you have false beliefs. You do not know which ones are false, but you can use this map, for example by investigating your beliefs to find out which ones are false.
“An inconsistent map is useless.” False, as we can see from the example.
“I don’t want consistency because consistency is desirable in and of itself; I want consistency because it is useful.” I agree, but I am pointing out that it is not infinitely useful, and that truth is even more useful than consistency. Truth (for example, “I have some false beliefs”) is more useful than the consistent but false claim that I have no false beliefs.
“An example, please? I cannot fathom a reason to possess inconsistent preferences.” It is not a question of having a reason to have inconsistent preferences, just as we were not talking about reasons to have inconsistent beliefs as though that were virtuous in itself. The reason for having inconsistent beliefs (in the example) is that any specific way to prevent your beliefs from being inconsistent will be stupid: if you arbitrarily flip A, B, or C, that will be stupid because it is arbitrary, and if you say “all of my beliefs are true,” that will be stupid because it is false. Inconsistency is not beneficial in itself, but it is more important to avoid stupidity. In the same way, suppose someone offers you the lifespan dilemma. If at the end you say, “Nope, I don’t want to commit suicide,” that will be like saying “some of my beliefs are false.” There will be an inconsistency, but getting rid of it will be worse.
(That said, it is even better to see how you can consistently avoid suicide. But if the only way you have to avoid suicide is an inconsistent one, that is better than nothing.)
“Consistent preferences are strictly more useful than inconsistent preferences.” This is false, just as in the case of beliefs, if your consistent preferences lead you to suicide and your inconsistent ones do not.
“Suppose you have beliefs A, B, C, and belief D: ‘At least one of beliefs A, B, C is false.’ The conjunction of A, B, C, and D is logically inconsistent. They cannot all be true, because if A, B, and C are all true, then D is false, while if D is true, at least one of the others is false. So if you think that you have some false beliefs (and everyone does), then the conjunction of that with the rest of your beliefs is logically inconsistent.”
But beliefs are not binary propositions; they are probability statements! It is perfectly consistent to assert that I have ~68% confidence in A, in B, in C, and in “At least one of A, B, C is false.”
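One way to see that those credences can coexist: if A, B, and C are treated as independent (an assumption of mine, chosen to make the arithmetic come out neatly), assigning probability 0.68 to each induces roughly probability 0.68 for “at least one is false”:

```python
p = 0.68
p_all_true = p ** 3                    # P(A and B and C) under independence
p_at_least_one_false = 1 - p_all_true  # P(at least one of A, B, C is false)
print(round(p_at_least_one_false, 3))  # 0.686
```

So a single coherent joint distribution can assign all four beliefs essentially the same confidence.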
Most people, most of the time, state their beliefs as binary propositions, not as probability statements. Furthermore, this is not just leaving out an actually existing detail; the detail is missing from reality. If I say, “That man is about 6 feet tall,” you can argue that he has an objectively precise height of 6 feet 2 inches or whatever. But if I say “the sky is blue,” it is false that there is an objectively precise probability that I assign to that statement. If you push me, I might come up with a number. But I am basically making the number up: it is not something that exists the way someone’s height does.
In other words, in the way that is relevant, beliefs are indeed binary propositions, and not probability statements. You are quite right, however, that in the process of becoming more consistent, you might want to approach the situation of having probabilities for your beliefs. But you do not currently have them for most of your beliefs, nor does any human.
What is the lifespan dilemma?
https://wiki.lesswrong.com/wiki/Lifespan_dilemma
As is written on the page: “Rejecting the Lifespan Dilemma seems to require either rejecting expected utility maximization, or using a bounded utility function.”