I’m still in college so right now all my money is going towards school. When I finish I plan on donating everything I can. The charity I think is best right now is Schistosomiasis Control Initiative.
I intend to continue to primarily direct my charitable giving towards PSI.
Why primarily? If it’s best to send some of your donations there, wouldn’t it be best to send all of them?
A common trait amongst humans is the desire to accumulate warm feelings. Optimizing for warm feelings is rarely accomplished by donating to a single charity.
Also, donating to several charities rather than only one offers a different array of signaling benefits.
I don’t think he’d post on here, in hopes of getting people to donate more, just so we can all get warm feelings.
I doubt posting here about donating to multiple charities signals anything good.
Telling people they get warm feelings by donating to multiple charities is a good way to make more charitable donations happen.
True, but I don’t think donations accomplish much if your goal isn’t actually to do something. Better to donate $1 to the Fred Hollows Foundation than to donate $10,000 to the Seeing Eye Foundation. (The first provides cataract surgeries for roughly one 20,000th of the price at which the second provides guide dogs.) Also, that wouldn’t explain why he implied he gives to multiple charities himself; it would only explain his suggesting that other people do so.
How is that the price ratio? Are the cataract doctors all volunteering their time? Are dogs astronomically more expensive than I think they are, or are there no volunteer seeing eye dog trainers to be had...?
The cataract doctors are in third-world countries where labor is cheap.
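Here is the bare arithmetic behind that comparison, as a minimal sketch. Both per-outcome costs below are placeholder figures, chosen only so that their ratio matches the roughly 20,000:1 claim made above; they are not actual costs for either charity.

```python
# Sketch of the cost-effectiveness comparison. Both per-outcome costs are
# placeholders chosen to reproduce the ~20,000:1 ratio claimed above; they
# are not audited figures for either charity.

COST_PER_CATARACT_SURGERY = 5.0   # dollars, placeholder
COST_PER_GUIDE_DOG = 100_000.0    # dollars, placeholder

def people_helped(donation: float, cost_per_outcome: float) -> float:
    """People helped by a donation, assuming outcomes scale linearly with money."""
    return donation / cost_per_outcome

print(people_helped(1.0, COST_PER_CATARACT_SURGERY))    # 0.2 surgeries from $1
print(people_helped(10_000.0, COST_PER_GUIDE_DOG))      # 0.1 guide dogs from $10,000
print(COST_PER_GUIDE_DOG / COST_PER_CATARACT_SURGERY)   # 20000.0 price ratio
```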
One could also look at it as hedging one’s bets, just as one would (typically) not put all of one’s money into one (boy is this getting confusing) stock in the market. Admittedly, charities aren’t really the same type of risk.
Question for downvoters: I acknowledge that RobertLumley is making an incorrect point. However, is he so incorrect as to deserve this many downvotes?
I think I’m the victim of some kind of karmassassination here. All recent comments I’ve made, even largely upvoted ones, have been wildly downvoted in the last hour for some reason.
(I’ve lost about 110 karma in the last hour, all of it on my last 10 or so comments, many of which hadn’t been voted on for about a week.)
Given your cries of victimhood, I think you would do well to read this timely thread:
http://lesswrong.com/lw/9b/help_help_im_being_oppressed/
If you have a better explanation as to why I might lose over 100 karma spread over nine comments (four of which hadn’t been voted on in four days until now) in three different threads in about an hour, I would like to hear it.
Losing karma is hardly real oppression. Nonetheless, your comment seems rather inappropriate.
One wouldn’t put all of one’s money into a single stock because we have decreasing marginal utility of money. If we didn’t, and all we wanted was the highest expected value (which is how we should optimize for charity), then we would put all our money into the stock with the highest expected return.
In other words, I don’t want to have a minimum number of lives saved—I want to maximize the number of lives I save.
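A minimal sketch of that point, assuming each charity converts dollars into expected lives saved at a fixed linear rate (the charities and rates below are invented for illustration): if all you care about is expected lives saved, concentrating the whole budget on the best expected rate beats splitting it.

```python
# Sketch: with linear returns, expected lives saved is maximized by giving the
# whole budget to the charity with the best expected rate. Rates are invented.

budget = 1000.0  # dollars to allocate

# hypothetical expected lives saved per dollar donated
expected_rate = {"charity_A": 1 / 2000, "charity_B": 1 / 5000, "charity_C": 1 / 10000}

def expected_lives(allocation: dict) -> float:
    """Expected lives saved for a given dollar allocation across charities."""
    return sum(expected_rate[name] * dollars for name, dollars in allocation.items())

concentrated = {"charity_A": budget, "charity_B": 0.0, "charity_C": 0.0}
split = {name: budget / 3 for name in expected_rate}

print(expected_lives(concentrated))  # 0.5 expected lives
print(expected_lives(split))         # ~0.27 expected lives
```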
Perhaps I didn’t express my point clearly enough. In fact, I’m certain of it. I was more trying to express that there is some element of risk in a charity: perhaps there is some probability that it is corrupt, etc., and isn’t as efficient as it’s rated to be. A better example is probably this:
Assuming it will succeed, the SIAI is pretty unarguably the most important charity in existence. But that’s a huge assumption, and it makes some sense to hedge against that bet and distribute money to other charities as well.
But you have no reason to be risk averse about purely altruistically motivated donations. A 50% chance to do some good is just as altruistically desirable as a 100% chance to do half as much good (ignoring changes in marginal utility or including them in “X as much good”).
I tend to agree with you. But many people are risk averse and would prefer the latter to the former, and I’m not necessarily sure you can say that’s wrong per se; it’s just a different utility function. What you can say is that that methodology leads to non-Pareto-optimal results in some cases.
It’s either “wrong” (irrational) or not purely altruistic. Of course even just mostly altruistic donations can do a lot of good and should be encouraged, rather than chastising the donors for being selfish or irrational, but that doesn’t change the facts about what would constitute rational altruistic behavior.
I think you’re using “rationality” the way Rand did. There are plenty of ways things can be wrong without being irrational, and there are plenty of ways to be irrational without being wrong. Wrong vs. irrational is quite often a debate about terminal values. In this case, your terminal value is maximizing good regardless of probability. In the hypothetical example, the person has a terminal value of risk aversion, which, agree or not, is a terminal value that many, many humans have.
I think you’re misusing “terminal values” here. Risk aversion is a pretty stable feature of people’s revealed preferences, so I can sort of see why you’re putting it into that category; but I’d characterize it more as a heuristic people use in the absence of good intuitions about long-term behavior than as a terminal value in its own right. If you offered people the chance to become less risk-averse in a context where that demonstrably leads to improved outcomes relative to other preferences, I’d bet they’d take it, and be reflectively consistent in doing so. People try to do that in weak ways all the time, in fact; it’s a fixture of self-help literature.
Well the reason I’d call it a terminal value is that if you asked people whether they would save 50 lives with 100% probability or 100 lives with 50% probability, people would tend to pick the former. When pressed why, they wouldn’t really have an explanation, other than that they value not taking risks.
Sure, but you could generate a scenario like that for just about any well-defined cognitive bias: it’s perilously close to the definition of bias, in fact. That doesn’t necessarily mean biases are inextricably incorporated into our value system, unless you’re defining human values purely in terms of revealed preferences—in which case why bother talking about this stuff at all?
I’m sorry for continuing this, because I feel like I’m just not getting why I’m wrong and we’re going in circles. And while I’m fairly confident that some of the downvoting is grudge-based, some of it is not, and was there before this happened.
How are you defining terminal values? EY defined them as values that “are desirable without conditioning on other consequences”. It seems to me that regardless of what the things are, if you value things you have (or sure things) more than potential future things, that would qualify as a terminal value.
I haven’t been downvoting you, for what it’s worth.
Anyway, I think our disagreement revolves around different interpretations of “desirable” in that quote (I think that definition’s a little loose, incidentally, but that doesn’t seem to be problematic here). You seem to be defining it in terms of choice: a world-state is desirable relative to another if an agent would choose it over the other given the opportunity. That’s pretty close to the thinking in economics among other disciplines, which is why I’ve been talking so much about revealed preference.
The problem is that we often choose things that turn out in retrospect to have served our needs poorly. With that in mind, I’m inclined to think of terminal values as irreducible terms in a utility function: features of future world-states that have a direct impact on an agent’s well-being (a loose term, but hopefully an understandable one), and which can’t be expressed in terms of more fundamental features. (There might be more than one decomposition of values here, in which case we should prefer the simplest one.)
That’s fundamentally choice-agnostic, although elective concordance with outcomes might turn out to be such a term. Irrational risk aversion (though risk aversion can be rational, taking into account the limitations of foresight!) and other cognitive biases are features of choice, not of utility: if they worked on utility directly, we wouldn’t call them biases.
By way of disclaimer, though, I should probably mention that this model isn’t a perfect one when applied to humans: we don’t seem to follow the VNM axioms consistently, so we can’t be said to have utility functions in the strict sense. Some features of our cognition seem to behave similarly within certain bounds, though, and it’s those that I’m focusing on above.
Excellently put; I think that sums up our disagreement very accurately. I’m not sure risk aversion couldn’t be expressed as an irreducible term in a utility function, though. I suppose it would be more of a trait of the utility function, such as all probabilities being raised to a power greater than one, or something.
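As a rough illustration of that suggestion (a sketch only, not a claim about how risk aversion actually works in people): weighting each probability as p^γ with γ > 1 makes the sure option win in the earlier 50-lives-for-certain vs. 100-lives-at-50% example, even though the two are equal in plain expected value. The γ value below is arbitrary.

```python
# Sketch of the "raise probabilities to a power greater than one" idea, applied
# to the earlier example: 50 lives with certainty vs. 100 lives at 50%.
# gamma is an arbitrary illustrative parameter, not an empirical estimate.

def weighted_value(prob: float, lives: float, gamma: float = 1.0) -> float:
    """Value of a gamble under a simple probability-weighting scheme."""
    return (prob ** gamma) * lives

print(weighted_value(1.0, 50))              # 50.0
print(weighted_value(0.5, 100))             # 50.0 -- equal in plain expected value
print(weighted_value(1.0, 50, gamma=1.5))   # 50.0
print(weighted_value(0.5, 100, gamma=1.5))  # ~35.4 -- the sure option now wins
```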
Aside from the question of whether risk aversion can usefully be considered a terminal value, such risk aversion cannot possibly be a purely altruistic value, because it is only noticeably risk-averse with respect to that particular donor, not with respect to the beneficiaries (unless your individual donation constitutes a significant fraction of the total funds for a particular cause).
Fair point. I agree. At least I did once I figured out what that sentence was actually saying ;-). I was just trying to offer a potential explanation for ChrisHallquist’s actions.
This would not be a good time to hedge one’s bets. I was largely asking so that I could correct him if he had done something wrong like that.
Part of me likes the idea of donating tens of dollars to the ACLU or similar cause as a symbolic gesture.
I once donated to the ACLU. I now receive “final renewal” notices from them every month, along with calls to action and other mail. I should calculate how much these cost them to create, print, and mail, so I can determine when they’ve spent my entire donation on marketing to me.
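A back-of-the-envelope version of that calculation; every figure below is a placeholder to be replaced with the actual donation amount and mailing costs, not a real ACLU number.

```python
# Back-of-the-envelope: how many months of mailings until the charity has
# spent the original donation on marketing back to the donor?
# All figures are placeholders, not actual ACLU numbers.

donation = 50.00            # hypothetical original donation, dollars
cost_per_mailing = 1.25     # hypothetical cost to create, print, and mail one piece
mailings_per_month = 2      # hypothetical: a "final renewal" notice plus other mail

monthly_marketing_cost = cost_per_mailing * mailings_per_month
months_until_spent = donation / monthly_marketing_cost

print(f"Donation exhausted after roughly {months_until_spent:.0f} months")  # ~20 months
```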