I agree that revenue is a key part of the organizational feedback loop that non-profits do not have, and it’s often a problem. However, for-profits have a tendency to drift toward whatever generates revenue. To the extent that we care about what an organization does for society, we should care about organizational drift caused by chasing revenue. I believe it’s an open question whether lack of revenue feedback in non-profits or organizational drift caused by revenue alignment in for-profits is currently a bigger problem in society.
I also think you may be underestimating the type and scope of programmatic evaluation that non-profits do. This is an extremely rich area, the goal of which is to ensure that operations are correctly aligned to the desired outcome. Non-financial metrics, for example, are one well-developed part of this. “Money coming in” is a very convenient metric for measuring success, but it is far from the only plausible feedback signal. If “clicks” are the false god of the attention economy, then revenue is the false god of the financial economy—in both cases easy to measure, sometimes aligned to value and sometimes not.
There’s a middle ground between having an organization be profitable, and an organization optimizing for profitability.
Well said. And this middle ground is exactly what I am worried about losing as companies add more AI to their operations—human managers can and do make many subtle choices that trade profit against other values, but naive algorithmic profit maximization will not. This is why my research is on metrics that may help align commercial AI to pro-social outcomes.
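To make the contrast concrete, here is a minimal sketch (my own toy construction; the actions, numbers, and penalty weight are all invented): a naive profit maximizer next to an objective that trades profit against an estimated social cost.

```python
# Toy setting with invented numbers: each candidate action has an
# estimated profit and an estimated social cost.
actions = {
    "aggressive_upsell": {"profit": 10.0, "social_cost": 8.0},
    "honest_pricing":    {"profit": 6.0,  "social_cost": 1.0},
    "dark_patterns":     {"profit": 9.0,  "social_cost": 9.5},
}

def naive_profit(name):
    # Pure profit maximization: ignores every other value.
    return actions[name]["profit"]

def tempered_objective(name, lam=1.0):
    # One way to encode the middle ground: profit minus a weighted
    # social-cost penalty. Choosing the weight lam is exactly the
    # judgment call a human manager would otherwise be making.
    return actions[name]["profit"] - lam * actions[name]["social_cost"]

print(max(actions, key=naive_profit))        # aggressive_upsell
print(max(actions, key=tempered_objective))  # honest_pricing
```

The point of the sketch is that the trade-off doesn’t disappear; it just moves into the choice of lam, which is where the alignment question actually lives.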
Naive algorithmic anything-optimization will not make those subtle trade-offs. Metric maximization run on humans is already a major failure point of large businesses, and the best an AI that uses metrics can do is draw attention to the fact that even metrics that don’t start out bad become bad over time.
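Here’s a toy simulation of that drift (entirely made-up distributions, not anyone’s real model): a proxy metric that starts out correlated with true value, where selecting harder on the proxy increasingly rewards the gameable part instead.

```python
import random

random.seed(0)

def option():
    # Each option has a true value plus a heavier-tailed "gameable"
    # term; the proxy metric is their sum, so it starts out
    # reasonably correlated with true value.
    true_value = random.gauss(0, 1)
    gaming = random.lognormvariate(0, 1)  # heavy-tailed, easy to exploit
    return true_value, true_value + gaming

# More options searched = more optimization pressure on the proxy.
for n in (10, 1_000, 100_000):
    true_v, proxy = max((option() for _ in range(n)), key=lambda o: o[1])
    print(f"n={n:>7}: best proxy {proxy:7.2f}, its true value {true_v:5.2f}")
```

As n grows, the winning proxy score climbs much faster than its true value: the metric was fine under weak selection and degrades under strong selection.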
Any metric can be gamed or can distort behavior, it’s true. No metric can substitute for judgment.
Re programmatic evaluation: It’s true that nonprofits *can* do this, but that only matters if *donors* on the whole care. This is why I said:
My sense is that donors do care about evaluation, on the whole. It’s not just GiveWell / Open Philanthropy / EA who think about this :P
See for example https://www.rockpa.org/guide/assessing-impact/
My sense is that they don’t care nearly enough.
How could we find evidence one way or another?
We could look at donors’ public materials, for example evaluation requirements listed in grant applications. We could examine the programs of conferences or workshops on philanthropy and see how often this topic is discussed. We could investigate the reports and research literature on this topic. But I don’t know how to define what would count as “enough” concern.