Congratulations on your first post.
"For instance I think it is preferable to give 1,000,000$ to a poor family rather than 3^^^3$ to 3^^^3 middle class families"
I agree, but only because the massive inflation caused by the first option makes it a net negative in my utility function. Assuming we are talking about Earth here, I believe the following:
It is preferable to give $10,000,000 to 10,000 middle class families than to give $1,000,000 to a single poor family.
If you are consistent, then you disagree with the above. If you agree with the above, then you are inconsistent. I have only two moral axioms (so far):
Be consistent.
Maximise your utility.
If you are inconsistent, then please fix that. :)
"If you are inconsistent, then please fix that. :)"
G.K. Chesterton made a lot of errors which he always managed to state in interesting ways. However, one thing he got right was the idea that a lot of insanity comes from an excessive insistence on consistency.
Consider the process of improving your beliefs. You may find out that they have some inconsistencies between one another, and you might want to fix that. But it is far more important to preserve consistency with reality than internal consistency, and an inordinate insistence on internal consistency can amount to insanity. For example, the conjunction of all of your beliefs with "some of my beliefs are false" is logically inconsistent. You could get more consistency by insisting that "all of my beliefs are true, without any exception." But that procedure would obviously amount to an insane arrogance.
The same sort of thing happens in the process of making your preferences consistent. Consistency is a good thing, but it would be stupid to follow it off a cliff. That does not mean that no possible set of preferences would be both sane and consistent. There are surely many such possible sets. But it is far more important to remain sane than to adopt a consistent, but insane, set of preferences.
"For example, the conjunction of all of your beliefs with 'some of my beliefs are false' is logically inconsistent."
I don't see how that conjunction is logically inconsistent? (Believing "all my beliefs are false" would be logically inconsistent, but I doubt any sensible person believes that.)
I think consistency is good. A map that is not consistent with itself cannot be used for the purposes of predicting the territory. An inconsistent map (especially one where the form and extent of the inconsistency is unknown, save that the map is inconsistent) cannot be used for inference. An inconsistent map is useless. I don't want consistency because consistency is desirable in and of itself—I want consistency because it is useful.
"The same sort of thing happens in the process of making your preferences consistent."
An example please? I cannot fathom a reason to possess inconsistent preferences. An agent with inconsistent preferences cannot make rational choices in decision problems involving those preferences. Decision theory requires that your preferences first be consistent before any normative rules can be applied. Inconsistent preferences result in a money pump. Consistent preferences are strictly more useful than inconsistent preferences.
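As a side note, the money-pump remark can be made concrete with a toy simulation. This is only an illustrative sketch with made-up items and a made-up fee, assuming an agent with cyclic preferences that will pay a small amount for any trade up its preference order:

```python
# An agent with cyclic preferences A > B > C > A can be traded around the
# cycle for a fee each time, ending up with its original item but poorer.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def will_pay_to_swap(current, offered):
    """The agent pays a small fee whenever the offered item beats its current one."""
    return (offered, current) in prefers

fee, wealth, holding = 1, 100, "C"
for offered in ["B", "A", "C", "B", "A", "C"]:  # the pump simply cycles its offers
    if will_pay_to_swap(holding, offered):
        holding, wealth = offered, wealth - fee

print(holding, wealth)  # C 94 -- back at the starting item, six units poorer
```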
"That does not mean that no possible set of preferences would be both sane and consistent."
Assuming that "sane" preferences are useful (if usefulness is not a characteristic of sane preferences, then I don't want sane preferences), I make the following claim:
"I don't see how that conjunction is logically inconsistent?" Suppose you have beliefs A, B, C, and belief D: "At least one of beliefs A, B, C is false." The conjunction of A, B, C, and D is logically inconsistent. They cannot all be true, because if A, B, and C are all true, then D is false, while if D is true, at least one of the others is false. So if you think that you have some false beliefs (and everyone does), then the conjunction of that with the rest of your beliefs is logically inconsistent.
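For readers who want to see the inconsistency mechanically, here is a minimal sketch (mine, not part of the exchange) that brute-forces the truth table for A, B, C together with D, read as "at least one of A, B, C is false":

```python
# Enumerate every truth assignment to A, B, C and keep those in which
# A, B, C and D ("at least one of A, B, C is false") all hold at once.
from itertools import product

satisfying = [
    (a, b, c)
    for a, b, c in product([True, False], repeat=3)
    if a and b and c and not (a and b and c)  # D is just the negation of (A and B and C)
]
print(satisfying)  # [] -- no assignment works, so the four beliefs are jointly inconsistent
```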
"I think consistency is good." I agree.
"A map that is not consistent with itself cannot be used for the purposes of predicting the territory." This is incorrect. It can predict two different things depending on which part is used, and one of those two will be correct.
"An inconsistent map (especially one where the form and extent of the inconsistency is unknown, save that the map is inconsistent) cannot be used for inference." This is completely wrong, as we can see from the example of recognizing the fact that you have false beliefs. You do not know which ones are false, but you can use this map, for example by investigating your beliefs to find out which ones are false.
"An inconsistent map is useless." False, as we can see from the example.
"I don't want consistency because consistency is desirable in and of itself—I want consistency because it is useful." I agree, but I am pointing out that it is not infinitely useful, and that truth is even more useful than consistency. Truth (for example "I have some false beliefs") is more useful than the consistent but false claim that I have no false beliefs.
"An example please? I cannot fathom a reason to possess inconsistent preferences." It is not a question of having a reason to have inconsistent preferences, just as we were not talking about reasons to have inconsistent beliefs as though that were virtuous in itself. The reason for having inconsistent beliefs (in the example) is that any specific way to prevent your beliefs from being inconsistent will be stupid: if you arbitrarily flip A, B, or C, that will be stupid because it is arbitrary, and if you say "all of my beliefs are true," that will be stupid because it is false. Inconsistency is not beneficial in itself, but it is more important to avoid stupidity. In the same way, suppose there is someone offering you the lifespan dilemma. If at the end you say, "Nope, I don't want to commit suicide," that will be like saying "some of my beliefs are false." There will be an inconsistency, but getting rid of it will be worse.
(That said, it is even better to see how you can consistently avoid suicide. But if the only way you have to avoid suicide is an inconsistent one, that is better than nothing.)
"Consistent preferences are strictly more useful than inconsistent preferences." This is false, just as in the case of beliefs, if your consistent preferences lead you to suicide, and your inconsistent ones do not.
"Suppose you have beliefs A, B, C, and belief D: 'At least one of beliefs A, B, C is false.' The conjunction of A, B, C, and D is logically inconsistent. They cannot all be true, because if A, B, and C are all true, then D is false, while if D is true, at least one of the others is false. So if you think that you have some false beliefs (and everyone does), then the conjunction of that with the rest of your beliefs is logically inconsistent."
But beliefs are not binary propositions; they are probability statements! It is perfectly consistent to assert that I have ~68% confidence in A, in B, in C, and in "At least one of A, B, C is false".
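Whether or not this was the intent, the ~68% figure happens to be exactly self-consistent under an independence assumption: if A, B, and C are independent and each held with probability p, then "at least one of A, B, C is false" has probability 1 - p^3, and the two coincide at the root of p^3 + p = 1. A quick sketch of the arithmetic, purely as illustration:

```python
# Solve p**3 + p - 1 = 0 by bisection: the confidence one can consistently
# assign both to each of three independent beliefs A, B, C and to
# "at least one of A, B, C is false" (whose probability is 1 - p**3).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mid**3 + mid - 1 < 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 4))  # ~0.6823, i.e. roughly the 68% mentioned above
```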
Most people, most of the time, state their beliefs as binary propositions, not as probability statements. Furthermore, this is not just leaving out an actually existing detail, but it is a detail missing from reality. If I say, “That man is about 6 feet tall,” you can argue that he has an objectively precise height of 6 feet 2 inches or whatever. But if I say “the sky is blue,” it is false that there is an objectively precise probability that I have for that statement. If you push me, I might come up with the number. But I am basically making the number up: it is not something that exists like someone’s height.
In other words, in the way that is relevant, beliefs are indeed binary propositions, and not probability statements. You are quite right, however, that in the process of becoming more consistent, you might want to approach the situation of having probabilities for your beliefs. But you do not currently have them for most of your beliefs, nor does any human.
What is the lifespan dilemma?
https://wiki.lesswrong.com/wiki/Lifespan_dilemma
As is written on the page: "Rejecting the Lifespan Dilemma seems to require either rejecting expected utility maximization, or using a bounded utility function."
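To see roughly why a bounded utility function changes the answer, here is a deliberately simplified sketch; the starting lifespan, the multipliers, and the particular bounded function are my own assumptions, not taken from the wiki page. Each offer trades a sliver of survival probability for a tenfold lifespan: a maximizer whose utility is linear in lifespan accepts every offer, so enough offers drive its survival probability as low as you like, while a bounded utility soon refuses.

```python
def offers_taken(utility, steps=50):
    """Count how many of `steps` offers an expected-utility maximizer accepts.

    Each offer: multiply lifespan by 10 at the cost of multiplying survival
    probability by 0.9999; dying immediately is worth 0. Illustrative numbers only.
    """
    prob, lifespan, taken = 1.0, 100.0, 0
    for _ in range(steps):
        # Expected utility of accepting vs. keeping the current gamble.
        if 0.9999 * prob * utility(lifespan * 10) > prob * utility(lifespan):
            prob, lifespan, taken = 0.9999 * prob, lifespan * 10, taken + 1
        else:
            break
    return taken

print(offers_taken(lambda L: L))              # 50: unbounded (linear) utility never says no
print(offers_taken(lambda L: L / (L + 1e6)))  # 8: this bounded utility soon stops accepting
```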
Thank you very much!
My comment was prematurely posted, please reread it.
I think that I am consistent: as you said, I disagree with the above. However, my disagreement in this case is milder (compared to the 3^^^3$ to 3^^^3 middle class families option), because the improvement in level of life quality is starting to become more relevant. Nevertheless, the desire to help people who are suffering for economic reasons remains greater than the desire to add happiness to the lives of people who are already serene.
Thank you for the opportunity of reflection.
Order the following outcomes in terms of their desirability. They are all alternative outcomes, and possess equal opportunity cost.
1. $1,000,000 to one poor family.
2. $10,000,000 to one poor family.
3. $1,000,000 (each) to 100 poor families.
4. $10,000,000 (each) to 100 poor families.
5. $10,000,000 (each) to 1000 middle class families.
6. $10,000,000 (each) to 10,000 middle class families.
7. $100,000,000 (each) to 1000 middle class families.
8. $100,000,000 (each) to 10,000 middle class families.
Assume negligible inflation results due to the distribution.
OK, let's try. From most desirable to least desirable: 4, 3, 2, 1, 8, 6, 7.
Both 4 and 3 will help 100 poor families, so they have priority. 2 and 1 will help one poor family, so they have priority over the last three options. 8 and 6 will help more people than 7. The rest is only a difference of quantity.
We do disagree, I guess. However you define your utility function, 8 comes out worse than 1. I find this very disturbing. How did you arrive at your conclusions? (It seems to me naive QALY calculations would place 8 as a clearly better option than 1.)
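For concreteness, here is one way to cash out the "naive calculation" being gestured at; the baseline wealth figures and the log-utility assumption are made up purely for illustration and are not from the thread:

```python
import math

# Crude diminishing-marginal-utility comparison: a family's utility is taken
# to be the log of its wealth, with hypothetical baselines per family.
POOR, MIDDLE = 10_000, 100_000  # assumed baseline wealth of a poor / middle class family

def total_gain(baseline, grant, families):
    """Summed log-utility gain when `families` families each receive `grant`."""
    return families * (math.log(baseline + grant) - math.log(baseline))

print(total_gain(POOR, 1_000_000, 1))           # option 1: ~4.6
print(total_gain(MIDDLE, 100_000_000, 10_000))  # option 8: ~69,000
```

Even with strongly diminishing returns per family, the sheer number of recipients dominates, which is the sense in which option 8 looks clearly better than option 1 on a naive calculation.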
This is my reasoning: if we assume that the middle class families have a stable economic situation, and that they have enough money for food, health care, a good home, education for their children, and so on, while the poor family's members lack these comforts and suffer hunger and disease as a result, then the poor family has priority in my system of values: I could easily stand the lack of a villa with a swimming pool for 10,000 lives if this would let me avoid a miserable life. (I think that we can summarize my ethic as Maslow's Hierarchy of Needs.) Of course, if the middle class families would donate lots of money to poor families, my answer would change.
But there are ten thousand middle class families, and just one poor family? Among those ten thousand, what about the chance that the money, for example, provides the necessary funds to:
Send their children to Ivy League schools.
Provide necessary treatment for debilitating illnesses.
Pay off debt.
Otherwise drastically improve their quality of life?
Good points, which I admit I had not considered. I live in a country where health care and education are affordable for middle class families, and as I have already written, I assumed that their economic situation was stable. If we consider these factors, then my answer will change.
Even if they have a stable economic condition, I still expect any sensible utilitarian calculation to prefer helping 10,000 middle class families over one poor family. How exactly did you calculate that helping one poor family is better?
As I tried to express in my post, I think that there are different "levels of life quality". For me, people in the lower levels have priority. I adopt utilitarianism only when I have to choose what is better within the same level.
The post's purpose wasn't to convince anyone that my values are right. I only want to show through some examples that, even though some boundaries are nebulous, we can agree that things that are very distant from the edge can be assigned to two different layers.
I only want to add that switching from one level to another has the highest value. So saving people who are fine is still important, because dying would make them fall from level X to 0.