There’s a middle ground between having an organization be profitable, and an organization optimizing for profitability.
Well said. And this middle ground is exactly what I worry about losing as companies add more AI to their operations—human managers can and do make many subtle choices that trade profit against other values, but naive algorithmic profit maximization will not. This is why my research focuses on metrics that may help align commercial AI with pro-social outcomes.
Naive algorithmic anything-optimization will not make those subtle trade-offs. Metric maximization run on humans is already a major failure point of large businesses, and the best a metric-driven AI can do is draw attention to the fact that even metrics that do not start out bad tend to become bad over time.