Note that Holden qualified his statement with “(too often)”.
And the point which I was making was that EEV does not do this too often, it does it just often enough, which I think is pretty clear mathematically.
Then investigating the cost-effectiveness of intervention A comes at an ostensible opportunity cost of X/10. But it’s ostensibly the case that the remaining $900,000 could in no case be spent with cost-effectiveness within an order of magnitude of spending the money on intervention A. So in the setting that I’ve just described, the opportunity cost of investigating is ostensibly too high to justify an investigation.
I don’t see what you’re driving at with the opportunity cost of X/10. Either we have less than $1,100,000, in which case the opportunity cost is X, or we have more than $1,100,000, in which case it is zero. Either we can do X or we can’t; we can’t do part of it or more of it.
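To make the two readings concrete, here is a minimal sketch in Python (with stand-in figures, since the original hypothetical isn’t fully reproduced here) of the divisible and all-or-nothing models of intervention A:

```python
# Assumed setup (stand-in figures): a $1,000,000 budget, an intervention A
# that costs $1,000,000 and produces value X, and an investigation that
# costs $100,000.
X = 1.0                      # value of fully funding intervention A
BUDGET = 1_000_000
COST_A = 1_000_000
COST_INVESTIGATION = 100_000

def value_divisible(spent_on_a: float) -> float:
    """Model 1: A is divisible, so partial funding yields proportional value."""
    return X * min(spent_on_a, COST_A) / COST_A

def value_indivisible(spent_on_a: float) -> float:
    """Model 2: A is all-or-nothing, so partial funding yields nothing."""
    return X if spent_on_a >= COST_A else 0.0

for model in (value_divisible, value_indivisible):
    baseline = model(BUDGET)                    # fund A with the whole budget
    after = model(BUDGET - COST_INVESTIGATION)  # investigate first, then fund A
    print(model.__name__, "opportunity cost of investigating:", baseline - after)

# Model 1 prices the investigation at X/10; model 2 prices it at X (and at
# zero if the budget were $1,100,000 or more). The two positions above
# correspond to these two models.
```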
The fact that naive EEV leads to this conclusion is evidence against the value of naive EEV. Of course, one can attempt to use a more sophisticated version of EEV; see the second and third paragraphs of Carl Shulman’s comment here.
If naive EEV causes problems then the problem is with naivete, not with EEV. Any decision procedure can lead to stupid actions if fed with stupid information.
See my fourth point in the section titled “In favor of a local approach to philanthropy” here.
You make the case that local philanthropy is better than global philanthropy on an individual basis, and if you are correct (which I don’t think you are) then EEV would choose to engage in local philanthropy.
It’s not humanly possible to take all factors into account; our brains aren’t designed to do so.
The correct response to our fallibility is not to go do random other things. Just because my best guess might be wrong doesn’t mean I should trade it for my second best guess, which is by definition even more likely to be wrong.
implicit knowledge which is inexplicable
A cognitive bias by another name is still a cognitive bias.
My intuition is that in the real world the incentive effects of using EEV would in fact be bad despite the point that you raise; but refining and articulating my intuition here would take some time and in any case is oblique to the primary matters under consideration.
I agree that it isn’t very important. Regardless of anything else, the possibility of more than a tiny proportion of donors actually applying EEV is not even remotely on the table.
I can’t tell whether we’re in disagreement or talking past each other.
And the point which I was making was that EEV does not do this too often, it does it just often enough, which I think is pretty clear mathematically.
You seem to be confusing EEV with expected value maximization. It’s clear from the mathematical definition of expected value that expected value maximization does this just often enough. It’s not at all tautological that EEV does it just often enough.
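As a rough illustration of why the two come apart, here is a toy simulation (invented numbers, nothing from Holden’s post): each option’s value is estimated with noise, and selecting on the noisy estimates systematically favours options whose errors happen to be favourable.

```python
# Toy model: 20 options with similar true values but differing estimation
# noise. Naive EEV picks the option with the highest *estimated* value.
import random

random.seed(0)
N_OPTIONS, N_TRIALS = 20, 10_000
total_estimated = 0.0
total_realized = 0.0

for _ in range(N_TRIALS):
    true_values = [random.gauss(1.0, 0.1) for _ in range(N_OPTIONS)]
    noise_sds = [0.1 + 0.5 * i / N_OPTIONS for i in range(N_OPTIONS)]
    estimates = [v + random.gauss(0.0, sd) for v, sd in zip(true_values, noise_sds)]
    chosen = max(range(N_OPTIONS), key=lambda i: estimates[i])
    total_estimated += estimates[chosen]
    total_realized += true_values[chosen]

print("mean estimated value of chosen option:", total_estimated / N_TRIALS)
print("mean realized value of chosen option: ", total_realized / N_TRIALS)
# The estimate of the chosen option systematically exceeds its realized
# value, so maximizing estimated EV does not maximize EV: naive EEV takes
# noisy gambles more often than "just often enough".
```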
If naive EEV causes problems then the problem is with naivete, not with EEV. Any decision procedure can lead to stupid actions if fed with stupid information.
[...]
The correct response to our fallibility is not to go do random other things. Just because my best guess might be wrong doesn’t mean I should trade it for my second best guess, which is by definition even more likely to be wrong.
Nobody’s arguing against expected value maximization. The claim made in the final section of the post and by myself is that using explicit expected value maximization does not maximize expected value and that one can do better by mixing explicit expected value maximization with heuristics.
To see how this could be so, consider the case of finding the optimal way to act if one’s hand is on a very hot surface. We have an evolutionarily ingrained response of jerking our hand away which produces a better outcome than “consider all possible actions; calculate the expected value of each action and perform the one with the highest expected value.”
Our evolutionarily conditioned responses and what we’ve gleaned from learning algorithms are not designed for optimal philanthropy but are nevertheless substantially more powerful and/or relevant than explicit reasoning in some domains relevant to optimal philanthropy.
A cognitive bias by another name is still a cognitive bias.
Possessing a given cognitive bias can be rational conditional on possessing another cognitive bias. Attempting to remove cognitive biases one at a time need not result in monotonic improvement. See Phil Goetz’s Reason as a memetic immune disorder.
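To illustrate the first claim, here is a toy model (invented numbers, not anything from Goetz’s post) in which two biases offset each other, so that shedding only one of them produces worse decisions than keeping both:

```python
# Bias A: long-shot probabilities get overestimated threefold.
# Bias B: long-shot payoffs get discounted threefold.
# Together they cancel; an agent who sheds only bias B does worse.
SAFE = (1.0, 1.0)   # (probability, payoff): true EV 1.0
RISKY = (0.1, 9.0)  # true EV 0.9

def perceived_ev(option, bias_a, bias_b):
    p, payoff = option
    long_shot = p < 0.5
    if long_shot and bias_a:
        p = min(1.0, 3 * p)  # overestimate the long shot's probability
    if long_shot and bias_b:
        payoff /= 3          # undervalue the long shot's payoff
    return p * payoff

for bias_a, bias_b in [(True, True), (True, False), (False, True), (False, False)]:
    choice = max([SAFE, RISKY], key=lambda o: perceived_ev(o, bias_a, bias_b))
    label = "RISKY" if choice is RISKY else "SAFE"
    print(f"bias A={bias_a}, bias B={bias_b}: picks {label}, true EV {choice[0] * choice[1]}")
# With both biases (or neither) the agent picks SAFE, the higher true EV.
# Removing only bias B leaves bias A uncorrected and flips the choice to
# RISKY: removing biases one at a time is not monotonic improvement.
```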
Nobody’s arguing against expected value maximization. The claim made in the final section of the post and by myself is that using explicit expected value maximization does not maximize expected value and that one can do better by mixing explicit expected value maximization with heuristics.
To see how this could be so, consider the case of finding the optimal way to act if one’s hand is on a very hot surface. We have an evolutionarily ingrained response of jerking our hand away which produces a better outcome than “consider all possible actions; calculate the expected value of each action and perform the one with the highest expected value.”
I think you’ve found the source of our disagreement here. I fully agree with the use of time-saving heuristics; I think the difference is that I want all my heuristics to ultimately be explicitly justified, not necessarily every time you use them, but at least once.
Knowing the reason for a heuristic is useful: it can help you refine it, it can tell you whether or not it’s safe to abandon it in certain situations, and sometimes it can alert you to a heuristic that really is just a bias. To continue with your example, I agree that checking whether it would be smart to take your hand off the cooker every single time is stupid, but I don’t see what’s wrong with at some point pausing for a moment, just to consider whether there might be unforeseen benefits to keeping your hand on the cooker (to my knowledge there aren’t).
An analogy can be made to mathematics: you don’t explicitly prove everything from the axioms, but you rely on established results which in turn rest on others and hopefully trace back to the axioms eventually. If that’s not the case, you start to worry.
As a second point, time-saving heuristics are at their most useful when time matters. For instance, if I had to choose a new charity every day, or if for some reason I only had ten minutes to choose one and my choice would then be set in stone for eternity, then time-saving heuristics would be the order of the day. But as I need only choose one, can safely take days or even weeks to make that decision without harming the charities in any significant way, and can furthermore change my choice whenever new information comes to light, the use of time-savers would be pure laziness on my part. And that’s just for me as an individual; for an organisation like GiveWell, which exists solely to perform this one task, they are inexcusable.
Possessing a given cognitive bias can be rational conditional on possessing another cognitive bias.
It can be beneficial, but not predictably so. If I know that I possess cognitive bias A, it is better to try to get rid of it than to introduce a second cognitive bias B.
Attempting to remove cognitive biases one at a time need not result in monotonic improvement.
Agreed, but it should result in improvement on average. Once again we come back to the issue of uncertainty aversion: whether it’s worthwhile to gamble when the odds are in your favour.
See Phil Goetz’s Reason as a memetic immune disorder.
I loved that when I first read it, but lately I’m unsure. If his hypothesis is correct, it would suggest that most religions are completely harmless in their ‘natural environment’, but excluding the last few centuries that doesn’t seem true.
Thanks for engaging with me.

The heuristics that we use are too numerous and too complex for it to be possible to explicitly justify all of them. Turning your mathematics analogy on its head, note that mathematicians have very little knowledge of the heuristics that they use to discover and prove theorems. Poincaré wrote some articles about this; if interested see The Value of Science.
There are over a million charities in the US alone. GiveWell currently has (around) 5 full-time staff. If GiveWell were to investigate every charity this year, each staff member would have to investigate over 500 charities per day. Moreover, comparing even two charities can be exceedingly tricky. I spent ~10 hours a week for five months investigating the cost-effectiveness of school-based deworming and I still don’t know whether it’s a better investment than bednets. So I strongly disagree that GiveWell shouldn’t use time-saving heuristics.
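A back-of-the-envelope check of those figures:

```python
# Figures as stated above: ~1,000,000 US charities, 5 staff, one year.
charities, staff, days = 1_000_000, 5, 365
print(charities / staff / days)  # ≈ 548 charities per staff member per day
```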
As for SIAI vs. VillageReach, it may well be that SIAI is a better fit for your values than VillageReach is. I currently believe that donating to SIAI has higher utilitarian expected value than donating to VillageReach, but also presently believe that a few years of searching will yield a charity at least twice as cost-effective as either at the margin. I have long been hoping for GiveWell to research x-risk charities. See my comment here. Over the next year I’ll be researching x-risk reduction charities myself.
It’s not clear to me that overcoming a generic bias should improve one’s rationality on average. This is an empirical question on which we have no data beyond anecdotal evidence. Placebo effect and selection bias may suffice to explain a subjective sense that overcoming biases is conducive to rationality. Anyway, on the matter at hand, I concur with Holden’s view that relying entirely on explicit formulas does not maximize expected value and that one should incorporate some measure of subjective judgment (as to how much, I am undecided).
I currently believe that donating to SIAI has higher utilitarian expected value than donating to VillageReach, but also presently believe that a few years of searching will yield a charity at least twice as cost-effective as either at the margin.
Interesting. Have you explained these beliefs anywhere?

No. I’ll try to explicate my thoughts soon. Thanks for asking.
There are over a million charities in the US alone. GiveWell currently has (around) 5 full-time staff. If GiveWell were to investigate every charity this year, each staff member would have to investigate over 500 charities per day. Moreover, comparing even two charities can be exceedingly tricky. I spent ~10 hours a week for five months investigating the cost-effectiveness of school-based deworming and I still don’t know whether it’s a better investment than bednets. So I strongly disagree that GiveWell shouldn’t use time-saving heuristics.
Aren’t the numbers here a little specious? There may be over a million charities (is this including nonprofits which run social clubs? There are a lot of categories of nonprofits), but we can dismiss hundreds of thousands with just a cursory examination of their goals or their activity level. For example, could any sports-related charity come within an order of magnitude or two of a random GiveWell-approved charity? Could any literary (or, heck, any humanities) charity do that without specious Pascal’s Wager-type arguments?
This isn’t a heuristic, this is simply the nature of the game. Some classes of activities just aren’t very useful from the utilitarian perspective. (Imagine Christianity approved of moving piles of sand with tweezers and hence there were a few hundred thousand charities surrounding this activity: every town or city has a charity or three providing subsidized sand pits and sand scholarships. If a GiveWell dismissed them all out of hand, would you attack that too as a heuristic?)
Notice the two examples you picked: deworming and bed nets. Both are already highly similar: public health measures. You didn’t pick ‘buy new pews for the local church’ and ‘deworm African kids’.
This looks a lot like a heuristic to me. Is “heuristic” derogative around here?

Why not go with the real-world version? (Especially since it involves ritual destruction of those piles of sand.)
Yes; heuristics allow errors and are suboptimal in many respects. (That’s why they are a ‘heuristic’ and not ‘the optimal algorithm’ or ‘the right answer’ or other such phrases.)
I don’t cite the sand mandalas both because they simply didn’t come to mind and because they’re quite beautiful.
I agree with most of what you say here, but fear that the discussion is veering in the direction of a semantics dispute. So I’ll just clarify my position by saying:
• Constructing an airtight argument for the relative lack of utilitarian value of e.g. all humanities charities relative to VillageReach is a nontrivial task (and indeed, may be impossible).
• Even if one limits oneself to the consideration of 10^(-4) of the field of all charities, one is still left with a very sizable analytical problem.
• The use of time-saving heuristics is essential to getting anything valuable done (see the sketch below).
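For concreteness, here is a minimal sketch of the kind of two-stage procedure I have in mind (my own toy construction; the names and categories are invented and this is not GiveWell’s actual process): a cheap heuristic screen discards the bulk of the field, and costly explicit analysis is spent only on the survivors.

```python
from dataclasses import dataclass

@dataclass
class Charity:
    name: str
    category: str  # e.g. "public health", "sports", "arts"
    active: bool

def cheap_screen(c: Charity) -> bool:
    """Heuristic pass: seconds per charity."""
    return c.active and c.category == "public health"

def explicit_evaluation(c: Charity) -> float:
    """Costly pass: weeks of cost-effectiveness research per charity (stubbed)."""
    return 0.0  # placeholder for the expensive explicit analysis

field = [
    Charity("VillageReach", "public health", True),
    Charity("Local sand-pit fund", "sports", True),
    Charity("Defunct literary society", "arts", False),
]
shortlist = [c for c in field if cheap_screen(c)]
best = max(shortlist, key=explicit_evaluation)
print([c.name for c in shortlist], "->", best.name)
```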
You make the case that local philanthropy is better than global philanthropy on an individual basis, and if you are correct (which I don’t think you are) then EEV would choose to engage in local philanthropy.
Note that in the link that you’re referring to I argue both for and against local philanthropy as opposed to global philanthropy. Anyway, I wasn’t referencing the post as a whole, I was referencing the point about the “act locally” heuristic solving a coordination problem that naive EEV fails to solve. It’s not clear that it’s humanly possible (or desirable) to derive that heuristic from first principles. Rather than trying to replace naive EEV with sophisticated EEV, one might be better off scrapping exclusive use of EEV altogether.
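To spell out the coordination problem with a toy example (invented figures; simple “spreading out” stands in for the “act locally” heuristic): if every donor follows naive EEV, all of them fund the single top-ranked charity past its room for more funding, while donors following a spreading heuristic waste less.

```python
N_DONORS, GIFT = 1000, 1.0
# Each charity: (room for more funding in dollars, value per dollar in that room)
charities = [(200.0, 10.0), (500.0, 9.0), (600.0, 8.0)]

def total_value(allocations):
    # Dollars beyond a charity's room for more funding produce no value.
    return sum(min(given, room) * value
               for (room, value), given in zip(charities, allocations))

naive = [N_DONORS * GIFT, 0.0, 0.0]  # everyone picks the top EEV charity
spread = [N_DONORS * GIFT / 3] * 3   # donors spread across the charities

print("naive EEV total value:    ", total_value(naive))
print("spreading heuristic value:", total_value(spread))
# Naive EEV: 2000 (the top charity's room is exhausted at $200).
# Spreading: ≈ 7667, despite no donor doing any extra calculation.
```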