The only qualification of U is that its values map to my preferences and that those values are transitive, such that if U(a1,b1) > U(a2,b2) and U(a2,b2) > U(a3,b3), then U(a1,b1) > U(a3,b3). There is no requirement that the arguments of U be measured in terms of dollars; the arguments could just as easily be the non-real sum of the monies provided by others and the monies provided by me.
U is a function of the number of antelope and the number of babies. By the law of transparency, it doesn’t care whether there are 100 antelope because you saved them or because someone else did. If you do care, then your preference function cannot be described as a function on this domain.
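To make the point concrete, here is a minimal sketch (all names and numbers are hypothetical, not from the original post): a preference that cares about *who* saved the antelope assigns two different values to the same (antelope, babies) pair, so no function on that domain can reproduce it.

```python
def my_satisfaction(antelope, babies, i_saved_them):
    """A preference that cares about personal involvement (hypothetical)."""
    bonus = 10 if i_saved_them else 0
    return antelope + babies + bonus

# Two world-states with identical (antelope, babies) counts...
a = my_satisfaction(100, 5, i_saved_them=True)
b = my_satisfaction(100, 5, i_saved_them=False)

# ...get different values. A function U(antelope, babies) must return a
# single value per input pair, so it cannot represent this preference.
print(a != b)  # True
```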
As defined in the original post, U is a function of the total amount of money given to charities A and B. There is no restriction that more money results in more antelope or babies saved, nor that the domain of the function is limited to positive real numbers.
Or are you saying that if I care about whether I help do something important, then my preferences must be non-transitive?
He writes U(A, B), where A is the number of antelope saved and B is the number of babies saved. If you care about anything other than the number of antelope saved or the number of babies saved, then U does not completely describe your preferences. Caring about whether you or someone else saves the antelope counts as caring about something other than the number of antelope saved. Unless you can exhibit a negative baby or a complex antelope, you must accept that this domain is limited to positive numbers.
He later derives from U a function of the amount of money given. Strictly speaking this is a completely different function; it is only denoted by U for convenience. However, the fact that U was initially defined in the previous way means it may have constraints other than transitivity.
To give an example, let f be any function on the real numbers; f currently has no constraints. We can make f into a function of vectors by saying f(x) = f(|x|), but the result is not a fully general function of vectors: it must satisfy the constraint of being constant on the surface of any sphere centered at the origin.
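The lifting described above can be sketched in a few lines (the particular choice of f is arbitrary, just for illustration):

```python
import math

def f(x: float) -> float:
    # Any function on the real numbers (an arbitrary example).
    return x * x - 3 * x

def f_vec(v) -> float:
    # Lift f to vectors via the norm: f_vec(v) = f(|v|).
    return f(math.hypot(*v))

# The lifted function is forced to be constant on any sphere around the
# origin: these three vectors all have norm 5, so all values agree.
print(f_vec((3, 4)), f_vec((5, 0)), f_vec((0, -5)))
```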
Fair cop; I was mistaken about the definition of U.
If there is no function U(a,b) which maps to my preferences across the region over which I have control, then the entire position of the original post is void.
Yes, I do not think we actually have a disagreement. The rule that you shouldn’t diversify only applies if your aim is to help people. It obviously doesn’t apply for all possible aims, as it is possible to imagine an agent with a terminal value for diversified charitable donations.
More specifically, it only applies if your goal is to help people, and your donation is not enough to noticeably change the marginal returns on investment.
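A toy model illustrates both halves of that condition (the per-dollar rates and the square-root returns curve are hypothetical, chosen only to make the arithmetic visible):

```python
import math

# Suppose charity A saves 2 lives per dollar and charity B saves 1, with a
# fixed budget. With constant marginal returns, giving everything to A
# beats every possible split.
budget = 100.0

def lives_linear(x):
    # x dollars to A, the remainder to B.
    return 2 * x + 1 * (budget - x)

all_in = lives_linear(budget)
best_split = max(lives_linear(x) for x in range(101))
print(all_in >= best_split)  # True: diversifying never helps

# But if the donation is large enough to hit diminishing returns (modeled
# here as square-root returns), a split can beat going all-in on A.
def lives_diminishing(x):
    return 2 * math.sqrt(x) + math.sqrt(budget - x)

print(lives_diminishing(80) > lives_diminishing(100))  # True
```

This is exactly the caveat above: the don't-diversify rule assumes your donation leaves the marginal returns essentially unchanged.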
Okay, yes, that's true.
Billionaires, feel free to ignore the OP.
Along with people who donate to small causes.
Although it still applies to them to the extent that they should not donate to a small cause if there is a big cause that offers better returns.