(Edit, later: this is related to the top-level replies by CarlShulman and V_V, but I think it’s a more general issue, or at least a more general way of putting the same issues.)
I’m wondering about a different effect: over-quantification and false precision leading to bad optimization choices as more and more effort flows into the seemingly most efficient utility-maximizing charities.
If we have metrics and we optimize for them, anything our metrics distort or exclude gets pushed even further out of the conversation. For instance, if we agree that maximizing human health is important, and use evidence showing that something like fighting disease or hunger has a huge positive effect on human health, we can easily optimize towards significant population growth followed by a crash, once resource constraints or food-production volatility later bind, killing billions. (Whether this describes reality is immaterial; the phenomenon of myopic optimization stands either way.)
Given that we advocate optimizing, are we, as rationalists, likely to fall prey to this sort of behavior when we pick metrics? Unless we understand the system much more fully, the answer is probably yes: incompletely understood systems have unanticipated side effects by definition, and the more tightly optimized a system becomes, the less robust it is to shocks.
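To make the worry concrete, here is a minimal toy sketch of the kind of myopic optimization I mean. All the numbers and dynamics are made up for illustration, not a model of any real intervention: an allocator that puts all its effort into the one measured quantity looks great until an unmeasured buffer it depends on runs out, while a diversified allocator ends up better off.

```python
# Toy sketch of myopic metric optimization; all numbers are hypothetical.
# "population" is the measured, optimized quantity; "resources" is an
# unmeasured buffer the metric ignores but the system depends on.

def simulate(effort_on_metric, steps=50):
    population = 100.0
    resources = 100.0
    for _ in range(steps):
        effort_on_buffer = 1.0 - effort_on_metric
        population *= 1.0 + 0.05 * effort_on_metric               # measured gains compound
        resources += 10.0 * effort_on_buffer - 0.02 * population  # drawdown scales with population
        if resources <= 0.0:                                       # the unmeasured constraint binds
            population *= 0.3                                       # crash
            resources = 10.0
    return population

print("all effort on the metric:", round(simulate(1.0)))
print("diversified effort      :", round(simulate(0.6)))
```

The specific dynamics are arbitrary; the point is only that anything the metric leaves out can end up dominating the long-run outcome.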
Greater diversity of investment across lower-priority goals and alternative ideals (meaning less optimization), as currently occurs, seems likely to mitigate these problems.
I think you’ve done better than CarlShulman and V_V at expressing what I see as the most fundamental problem with EA: the fact that it is biased towards the easily measurable and the short-term measurable, while (it seems to me) the most effective interventions are often neither.
In other words: how do you avoid the pathologies of No Child Left Behind, where “reform” becomes synonymous with optimizing to a flawed (and, ultimately, costly) metric?
The original post touches on this issue, but not at all deeply.