I have a number of issues with your criticisms of EEV.
In such a world, when people decided that a particular endeavor/action had outstandingly high EEV, there would (too often) be no justification for costly skeptical inquiry of this endeavor/action.
I’m not sure this is true; sceptical inquiry can have a high expected value when it helps you work out what is a better use of limited resources. In particular, my maths might be wrong, but I think that in the case of an action with a low probability of producing a large gain, any investigation that will confirm whether this is true or not is worth attempting unless either:
The sceptical enquiry will actually cost more, on its own, than just going ahead and performing the action.
The sceptical enquiry will cost so much that having done it, if it turns out the action is worth doing, you will no longer be able to afford to do it.
It seems to me that in both of these cases it would be pretty obviously stupid to have a sceptical enquiry (a rough sketch of the comparison follows below).
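Roughly, the comparison I have in mind looks like this (toy numbers, and assuming a perfectly reliable enquiry; the second exception above corresponds to a hard budget constraint that this sketch leaves out):

```python
# A minimal sketch with made-up numbers, assuming a perfectly reliable enquiry
# that tells you with certainty whether the large gain is real.

def ev_act_blindly(p, gain, action_cost):
    """Expected value of funding the action without any sceptical enquiry."""
    return p * gain - action_cost

def ev_enquire_first(p, gain, action_cost, enquiry_cost):
    """Expected value of paying for the enquiry, then funding the action
    only if the enquiry confirms that the gain is real."""
    return p * (gain - action_cost) - enquiry_cost

p, gain, action_cost, enquiry_cost = 0.001, 10_000_000, 5_000, 500

print(ev_act_blindly(p, gain, action_cost))                  # 5000.0
print(ev_enquire_first(p, gain, action_cost, enquiry_cost))  # 9495.0

# Enquiring first wins whenever enquiry_cost < (1 - p) * action_cost, i.e. (for
# small p) roughly whenever the enquiry costs less than the action itself,
# which is the first of the two exceptions above.
```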
In such a world, it seems that nearly all altruists would put nearly all of their resources toward helping people they knew little about, rather than helping themselves, their families and their communities. I believe that the world would be worse off if people behaved in this way, or at least if they took it to an extreme.
Why do you believe this? Do you have any evidence or even arguments? It seems pretty unintuitive to me that the sum of a bunch of actions, each of which increases total welfare, could somehow be a decrease in total welfare.
When you say “taken to an extreme”, I suspect you are imagining our hypothetical EEV agents ignoring various side-effects of their actions, in which case the problem is with them failing to take all factors into account, rather than with them using EEV.
Related: giving based on EEV seems to create bad incentives. EEV doesn’t seem to allow rewarding charities for transparency or penalizing them for opacity: it simply recommends giving to the charity with the highest estimated expected value, regardless of how well-grounded the estimate is. Therefore, in a world in which most donors used EEV to give, charities would have every incentive to announce that they were focusing on the highest expected-value programs, without disclosing any details of their operations that might show they were achieving less value than theoretical estimates said they ought to be.
Not true. If all donors followed EEV, charities would indeed have an incentive to conceal information about things they are doing badly, and donors would in turn, and in accordance with EEV, start to treat failure to disclose information as evidence that the information was unflattering. This would in turn incentivise charities to disclose information about things they are doing only slightly badly, which would in turn cause donors to view secrecy in an even worse light, and so on, until we eventually reach an equilibrium where charities disclose all information.
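As a toy illustration of this unravelling (made-up uniformly distributed charity qualities, and assuming donors estimate any non-discloser at the average quality of the currently secretive pool):

```python
# Toy model of the disclosure "unravelling" argument. Donors treat
# non-disclosure as evidence of below-average quality, so each round more
# charities prefer disclosing to being lumped in with the secretive pool.
import numpy as np

rng = np.random.default_rng(0)
quality = rng.uniform(0, 1, size=1000)       # each charity's true quality
disclosed = np.zeros(quality.shape, dtype=bool)

for step in range(20):
    hidden = quality[~disclosed]
    if hidden.size == 0:
        break
    presumed = hidden.mean()          # donors' presumption about a non-discloser
    disclosed |= quality > presumed   # disclose once your true quality beats that
    print(step, round(disclosed.mean(), 3))

# The disclosing fraction climbs toward 1: only charities at the very bottom of
# the distribution gain anything from staying secret.
```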
Of course, this assumes that all charities and all donors are completely rational, which is a total fantasy, but I think the same can be said of your own argument. And even if we do end up stuck part-way to equilibrium, with charities keeping some information secret, as donors we can just take that information into account and correctly treat it as Bayesian evidence of a problem.
Up till now I have been donating to VillageReach rather than SIAI based on GiveWell’s advice, but if I am to treat this article as evidence of your thought process in general, then I don’t like what I see and I may well change my mind.
Having said that, I do like the maths. I’m just not at all sure that it in any way contradicts EEV.
I’m not sure this is true; sceptical inquiry can have a high expected value when it helps you work out what is a better use of limited resources. [...]
Note that Holden qualified his statement with “(too often)”.
I think that in the case of an action with a low probability of producing a large gain, any investigation that will confirm whether this is true or not is worth attempting unless either [...] It seems to me that in both of these cases it would be pretty obviously stupid to have a sceptical enquiry.
Concerning your second point: suppose that spending a million dollars on intervention A ostensibly has an expected value X which is many orders of magnitude greater than that of any other intervention (and suppose, for simplicity, negligible diminishing marginal utility per dollar). Suppose that it would cost $100,000 to investigate whether the ostensibly high expected value is well-grounded.
Then investigating the cost-effectiveness of intervention A comes at an ostensible opportunity cost of X/10. But it’s ostensibly the case that the remaining $900,000 could in no case be spent with cost-effectiveness within an order of magnitude of spending the money on intervention A. So in the setting that I’ve just described, the opportunity cost of investigating is ostensibly too high to justify an investigation.
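In toy numbers (a rough sketch; it keeps the linearity simplification above and ignores whatever value of information the enquiry itself would supply):

```python
# Ostensible figures only: X is taken literally, as naive EEV would take it.
X = 1_000_000.0             # ostensible value of spending the $1M on intervention A (arbitrary units)
BEST_ALTERNATIVE = 1_000.0  # ostensible value of spending $1M on the best known alternative
ENQUIRY_FRACTION = 0.1      # the $100,000 enquiry, as a fraction of the $1M budget

ev_no_enquiry = X
ev_with_enquiry = (1 - ENQUIRY_FRACTION) * X + ENQUIRY_FRACTION * BEST_ALTERNATIVE

print(ev_no_enquiry - ev_with_enquiry)   # roughly X/10: the ostensible cost of enquiring
```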
Note that a similar situation could prevail even if investigating the intervention cost only $100 or $10, provided that the ostensible expected value X is sufficiently high relative to other known options.
The point that I’m driving at here is that there’s not a binary “can afford” or “can’t afford” distinction concerning the possibility of funding A: it can easily happen that spending any resources whatsoever investigating A is ostensibly too costly to be worthwhile. This conclusion is counter-intuitive, and seemingly very similar to Pascal’s Mugging.
The fact that naive EEV leads to this conclusion is evidence against the value of naive EEV. Of course, one can attempt to use a more sophisticated version of EEV; see the second and third paragraphs of Carl Shulman’s comment here.
Why do you believe this? Do you have any evidence or even arguments? It seems pretty unintuitive to me that the sum of a bunch of actions, each of which increases total welfare, could somehow be a decrease in total welfare.
See my fourth point in the section titled “In favor of a local approach to philanthropy” here.
When you say “taken to an extreme”, I suspect you are imagining our hypothetical EEV agents ignoring various side-effects of their actions, in which case the problem is with them failing to take all factors into account, rather than with them using EEV.
It’s not humanly possible to take all factors into account; our brains aren’t designed to do so. Given how the human brain is structured, using implicit knowledge which is inexplicable can yield better decision making for humans than using explicit knowledge. This is the point of the section of Holden’s post titled “Generalizing the Bayesian approach.”
Not true. If all donors followed EEV, charities would indeed have an incentive to conceal information about things they are doing badly, and donors would in turn, and in accordance with EEV, start to treat failure to disclose information as evidence that the information was unflattering. This would in turn incentivise charities to disclose information about things they are doing only slightly badly, which would in turn cause donors to view secrecy in an even worse light, and so on, until we eventually reach an equilibrium where charities disclose all information.
I think you’re right about this.
Of course, this assumes that all charities and all donors are completely rational, which is a total fantasy, but I think the same can be said of your own argument. And even if we do end up stuck part-way to equilibrium, with charities keeping some information secret, as donors we can just take that information into account and correctly treat it as Bayesian evidence of a problem.
My intuition is that in the real world the incentive effects of using EEV would in fact be bad despite the point that you raise; but refining and articulating my intuition here would take some time and in any case is oblique to the primary matters under consideration.
Note that Holden qualified his statement with “(too often)”.
And the point I was making was that EEV does not do this too often; it does it just often enough, which I think is pretty clear mathematically.
Then investigating the cost-effectiveness of intervention A comes at an ostensible opportunity cost of X/10. But it’s ostensibly the case that the remaining $900,000 could in no case be spent with cost-effectiveness within an order of magnitude of spending the money on intervention A. So in the setting that I’ve just described, the opportunity cost of investigating is ostensibly too high to justify an investigation.
I don’t see what you’re driving at with the opportunity cost of X/10. Either we have less than $1,100,000, in which case the opportunity cost is X, or we have more than $1,100,000, in which case it is zero. Either we can do A or we can’t; we can’t do part of it or more of it.
The fact that naive EEV leads to this conclusion is evidence against the value of naive EEV. Of course, one can attempt to use a more sophisticated version of EEV; see the second and third paragraphs of Carl Shulman’s comment here.
If naive EEV causes problems then the problem is with naivete, not with EEV. Any decision procedure can lead to stupid actions if fed with stupid information.
See my fourth point in the section titled “In favor of a local approach to philanthropy” here.
You make the case that local philanthropy is better than global philanthropy on an individual basis, and if you are correct (which I don’t think you are) then EEV would choose to engage in local philanthropy.
It’s not humanly possible to take all factors into account; our brains aren’t designed to do so.
The correct response to our fallibility is not to go do random other things. Just because my best guess might be wrong doesn’t mean I should trade it for my second best guess, which is by definition even more likely to be wrong.
implicit knowledge which is inexplicable
A cognitive bias by another name is still a cognitive bias.
My intuition is that in the real world the incentive effects of using EEV would in fact be bad despite the point that you raise; but refining and articulating my intuition here would take some time and in any case is oblique to the primary matters under consideration.
I agree that it isn’t very important. Regardless of anything else, the possibility of more than a tiny proportion of donors actually applying EEV is not even remotely on the table.
I can’t tell whether we’re in disagreement or talking past each other.
And the point I was making was that EEV does not do this too often; it does it just often enough, which I think is pretty clear mathematically.
You seem to be confusing EEV with expected value maximization. It’s clear from the mathematical definition of expected value that expected value maximization does this just often enough. It’s not at all tautological that EEV does it just often enough.
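To make the distinction concrete, here is a toy simulation (made-up prior and noise levels, in the spirit of the post’s Bayesian-adjustment maths rather than taken from it): when estimates are noisy, the option with the highest raw estimate and the option with the highest Bayesian-adjusted estimate need not coincide, and acting on the raw estimate reliably realises less true value.

```python
# Sketch: EEV (act on the raw explicit estimate) vs. expected value maximisation
# (act on the Bayesian-adjusted estimate). Toy set-up: true values drawn from a
# known prior; half the options have well-grounded estimates (low noise), half
# are speculative (high noise).
import numpy as np

rng = np.random.default_rng(1)
n_options, n_trials = 50, 20_000
PRIOR_SD = 1.0
noise_sd = np.where(np.arange(n_options) < n_options // 2, 0.5, 5.0)

total_eev = total_adjusted = 0.0
for _ in range(n_trials):
    true_value = rng.normal(0.0, PRIOR_SD, n_options)
    estimate = true_value + rng.normal(0.0, noise_sd)
    # EEV: take the explicit estimates literally and fund the largest.
    total_eev += true_value[np.argmax(estimate)]
    # Bayesian adjustment: shrink each estimate toward the prior mean of zero,
    # penalising the noisier (less well-grounded) estimates more heavily.
    posterior_mean = estimate * PRIOR_SD**2 / (PRIOR_SD**2 + noise_sd**2)
    total_adjusted += true_value[np.argmax(posterior_mean)]

print("EEV:", total_eev / n_trials, "adjusted:", total_adjusted / n_trials)
# The adjusted rule realises noticeably more true value per choice, even though
# both rules are "maximising" something.
```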
If naive EEV causes problems then the problem is with naivete, not with EEV. Any decision procedure can lead to stupid actions if fed with stupid information.
[...]
The correct response to our fallibility is not to go do random other things. Just because my best guess might be wrong doesn’t mean I should trade it for my second best guess, which is by definition even more likely to be wrong.
Nobody’s arguing against expected value maximization. The claim made in the final section of the post and by myself is that using explicit expected value maximization does not maximize expected value and that one can do better by mixing explicit expected value maximization with heuristics.
To see how this could be so, consider the case of finding the optimal way to act if one’s hand is on a very hot surface. We have an evolutionarily ingrained response of jerking our hand away which produces a better outcome than “consider all possible actions; calculate the expected value of each action and perform the one with the highest expected value.”
Our evolutionarily conditioned responses and what we’ve gleaned from learning algorithms are not designed for optimal philanthropy but may nevertheless be substantially more powerful and/or relevant than explicit reasoning in some domains relevant to optimal philanthropy.
A cognitive bias by another name is still a cognitive bias.
Possessing a given cognitive bias can be rational conditional on possessing another cognitive bias. Attempting to remove cognitive biases one at a time need not result in monotonic improvement. See Phil Goetz’s Reason as a memetic immune disorder.
Nobody’s arguing against expected value maximization. The claim made in the final section of the post and by myself is that using explicit expected value maximization does not maximize expected value and that one can do better by mixing explicit expected value maximization with heuristics.
To see how this could be so, consider the case of finding the optimal way to act if one’s hand is on a very hot surface. We have an evolutionarily ingrained response of jerking our hand away which produces a better outcome than “consider all possible actions; calculate the expected value of each action and perform the one with the highest expected value.”
I think you’ve found the source of our disagreement here. I fully agree with the use of time-saving heuristics; I think the difference is that I want all my heuristics to ultimately be explicitly justified, not necessarily every time you use them, but at least once.
Knowing the reason for a heuristic is useful: it can help you refine it, it can tell you whether or not it’s safe to abandon it in certain situations, and sometimes it can alert you to a heuristic that really is just a bias. To continue with your example, I agree that checking whether it would be smart to take your hand off the cooker every single time is stupid, but I don’t see what’s wrong with at some point pausing for a moment, just to consider whether there might be unforeseen benefits to keeping your hand on the cooker (to my knowledge there aren’t).
An analogy can be made to mathematics: you don’t explicitly prove everything from the axioms, but you rely on established results which in turn rest on others and hopefully trace back to the axioms eventually; if that’s not the case, you start to worry.
As a second point, time-saving heuristics are at their most useful when time matters. For instance, if I had to choose a new charity every day, or if for some reason I only had ten minutes to choose one and my choice would then be set in stone for eternity, then time-saving heuristics would be the order of the day. But as I need only choose one, can safely take days or even weeks to make that decision without harming the charities in any significant way, and furthermore can change my choice whenever I want if new information comes to light, it seems like the use of time-savers would be pure laziness on my part. And that’s just for me as an individual; for an organisation like GiveWell, which exists solely to perform this one task, they are inexcusable.
Possessing a given cognitive bias can be rational conditional on possessing another cognitive bias.
It can be beneficial, but not predictably so. If I know that I possess cognitive bias A, it is better to try to get rid of it than to introduce a second cognitive bias B.
Attempting to remove cognitive biases one at a time need not result in monotonic improvement.
Agreed, but it should result in improvement on average. Once again we come back to the issue of uncertainty aversion: whether it’s worthwhile to gamble when the odds are in your favour.
See Phil Goetz’s Reason as a memetic immune disorder.
I loved that when I first read it, but lately I’m unsure. If his hypothesis is correct, it would suggest that most religions are completely harmless in their ‘natural environment’, but excluding the last few centuries that doesn’t seem true.
Thanks for engaging with me.
The heuristics that we use are too numerous and too complex for it to be possible to explicitly justify all of them. Turning your mathematics analogy on its head, note that mathematicians have very little knowledge of the heuristics that they use to discover and prove theorems. Poincaré wrote some articles about this; if interested, see The Value of Science.
There are over a million charities in the US alone. GiveWell currently has (around) 5 full-time staff. If GiveWell were to investigate every charity this year, each staff member would have to investigate over 500 charities per day. Moreover, comparing even two charities can be exceedingly tricky. I spent ~10 hours a week for five months investigating the cost-effectiveness of school-based deworming and I still don’t know whether it’s a better investment than bednets. So I strongly disagree that GiveWell shouldn’t use time-saving heuristics.
As for SIAI vs. VillageReach, it may well be that SIAI is a better fit for your values than VillageReach is. I currently believe that donating to SIAI has higher utilitarian expected value than donating to VillageReach, but also presently believe that a few years of searching will yield a charity at least twice as cost-effective as either at the margin. I have long been hoping for GiveWell to research x-risk charities. See my comment here. Over the next year I’ll be researching x-risk reduction charities myself.
It’s not clear to me that overcoming a generic bias should improve one’s rationality on average. This is an empirical question with no data but anecdotal evidence. Placebo effect and selection bias may suffice to explain a subjective sense that overcoming biases is conducive to rationality. Anyway, on the matter at hand, I concur with Holden’s view that relying entirely on explicit formulas does not maximize expected value and that one should incorporate some measure of subjective judgment (as to how much, I am undecided).
I currently believe that donating to SIAI has higher utilitarian expected value than donating to VillageReach, but also presently believe that a few years of searching will yield a charity at least twice as cost-effective as either at the margin.
Interesting. Have you explained these beliefs anywhere?
No. I’ll try to explicate my thoughts soon. Thanks for asking.
There are over a million charities in the US alone. GiveWell currently has (around) 5 full-time staff. If GiveWell were to investigate every charity this year, each staff member would have to investigate over 500 charities per day. Moreover, comparing even two charities can be exceedingly tricky. I spent ~10 hours a week for five months investigating the cost-effectiveness of school-based deworming and I still don’t know whether it’s a better investment than bednets. So I strongly disagree that GiveWell shouldn’t use time-saving heuristics.
Aren’t the numbers here a little specious? There may be over a million charities (is this including nonprofits which run social clubs? There are a lot of categories of nonprofits), but we can dismiss hundreds of thousands with just a cursory examination of their goals or their activity level. For example, could any sports-related charity come within an order of magnitude or two of a random GiveWell-approved charity? Could any literary (or, heck, humanities) charity do that without specious Pascal’s Wager-type arguments?
This isn’t a heuristic; this is simply the nature of the game. Some classes of activities just aren’t very useful from the utilitarian perspective. (Imagine Christianity approved of moving piles of sand with tweezers and hence there were a few hundred thousand charities surrounding this activity, with every town or city having a charity or three providing subsidized sand pits and sand scholarships. If GiveWell dismissed them all out of hand, would you attack that too as a heuristic?)
Notice the two examples you picked: deworming and bed nets. Both are already highly similar: public health measures. You didn’t pick ‘buy new pews for the local church’ and ‘deworm African kids’.
This looks a lot like a heuristic to me. Is “heuristic” derogative around here?
Yes; heuristics allow errors and are suboptimal in many respects. (That’s why they are a ‘heuristic’ and not ‘the optimal algorithm’ or ‘the right answer’ or other such phrases.)
Why not go with the real-world version? (Especially since it involves ritual destruction of those piles of sand.)
I didn’t cite the sand mandalas both because they simply didn’t come to mind and because they’re quite beautiful.
I agree with most of what you say here, but fear that the discussion is veering in the direction of a semantics dispute. So I’ll just clarify my position by saying:
• Constructing an airtight argument for the relative lack of utilitarian value of e.g. all humanities charities relative to VillageReach is a nontrivial task (and indeed, may be impossible).
• Even if one limits oneself to the consideration of 10^(-4) of the field of all charities, one is still left with a very sizable analytical problem.
• The use of time-saving heuristics is essential to getting anything valuable done.
You make the case that local philanthropy is better than global philanthropy on an individual basis, and if you are correct (which I don’t think you are) then EEV would choose to engage in local philanthropy.
Note that in the link that you’re referring to, I argue both for and against local philanthropy as opposed to global philanthropy. Anyway, I wasn’t referencing the post as a whole; I was referencing the point about the “act locally” heuristic solving a coordination problem that naive EEV fails to solve. It’s not clear that it’s humanly possible (or desirable) to derive that heuristic from first principles. Rather than trying to replace naive EEV with sophisticated EEV, one might be better off scrapping exclusive use of EEV altogether.