For the record, the canonical solution to the object-level problem here is Shapley Value. I don’t disagree with the meta-level point, though: a calculation of Shapley Value must begin with a causal model that can predict outcomes with any subset of contributors removed.
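For concreteness, here is a minimal sketch of the exact computation, assuming you already have such a causal model available as a value function `v` mapping any subset of contributors to the predicted outcome (the names and setup here are illustrative, not any standard library's API):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a coalition game.

    players: list of contributor identifiers.
    v: the causal model, as a function from a frozenset of contributors
       to the predicted outcome with everyone else removed.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # Weight each coalition S not containing i by the probability that
        # i joins immediately after S in a uniformly random ordering.
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi
```

Note that the exact computation enumerates every coalition, so it scales exponentially in the number of contributors; in practice one usually estimates it by sampling random orderings and averaging marginal contributions.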
I walked through some examples of Shapley Value here, and I’m not so sure it satisfies exactly what we want on an object level. I don’t have a great realistic example here, but Shapley Value assigns counterfactual value to individuals who in fact did not contribute at all, if they would have contributed were your higher performers not present. So you can easily have “dead weight” on a team with a high Shapley Value, as long as they could provide value if their better teammates were gone.
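A toy numeric version of that failure mode, with made-up numbers: A produces 10 units alone, B produces 6 alone, and together they still produce 10, so B contributes nothing on the margin once A is present.

```python
# Made-up two-person team: v maps each coalition to its output.
v = {
    frozenset():           0,
    frozenset({"A"}):      10,
    frozenset({"B"}):      6,
    frozenset({"A", "B"}): 10,  # B adds nothing once A is present
}
# With two players, the two join orders (A then B, B then A) are equally
# likely, so each Shapley value averages two marginal contributions.
phi_A = 0.5 * (v[frozenset({"A"})] - v[frozenset()]) + \
        0.5 * (v[frozenset({"A", "B"})] - v[frozenset({"B"})])
phi_B = 0.5 * (v[frozenset({"B"})] - v[frozenset()]) + \
        0.5 * (v[frozenset({"A", "B"})] - v[frozenset({"A"})])
print(phi_A, phi_B)  # 7.0 3.0 -- the "dead weight" B still gets credit 3
```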
Thanks for the pointer!
Shapley values are very cool. Let me mention some cool facts:
They arise in (cooperative) game theory, but also in ML when allocating credit for a combined prediction formed by mixing the predictions of different modules of a system.
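As a toy illustration of that use (the module names and predictions are made up, and this reuses the `shapley_values` sketch upthread): let v(S) be the ensemble's prediction using only the modules in S; the Shapley values then split the combined prediction among the modules.

```python
# Made-up module predictions for a single example; v(S) is the mean
# prediction of the modules in S (an empty ensemble predicts 0).
preds = {"tree": 0.9, "linear": 0.5, "nn": 0.7}

def ensemble(S):
    return sum(preds[m] for m in S) / len(S) if S else 0.0

# By the efficiency axiom, the values sum to the full ensemble's
# prediction (0.7), splitting credit among the three modules.
print(shapley_values(list(preds), ensemble))
```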
One piece of evidence of how fundamental they are is that they arise naturally from Hodge theory on the hypercube of a coalition game: https://arxiv.org/abs/1709.08318
Another interesting fact I learned from Davidad: Shapley values are not compositional: a group of actors can increase their total Shapley value by forming a cabal whose members refuse to cooperate with individuals outside the cabal unless the rest of the cabal joins in. This can serve as a measure of collusion potential.
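A toy instance, again reusing `shapley_values` from upthread: in the standard three-player majority game (any two players win), each player's Shapley value is 1/3, so players 2 and 3 together hold 2/3. If 2 and 3 form a cabal, their combined share rises to 1.

```python
# Three-player majority game: a coalition wins (value 1) iff it has >= 2 members.
def majority(S):
    return 1.0 if len(S) >= 2 else 0.0

players = ["1", "2", "3"]
print(shapley_values(players, majority))    # everyone gets ~1/3; {2, 3} hold ~2/3

# Now 2 and 3 form a cabal: either one, if in a coalition with player 1
# but without the other, refuses to cooperate and sits out.
CABAL = frozenset({"2", "3"})
def with_cabal(S):
    inside = CABAL & S
    if inside and inside != CABAL and (S - CABAL):
        return majority(S - inside)  # the lone cabal member withholds effort
    return majority(S)

print(shapley_values(players, with_cabal))  # ~{1: 0, 2: 0.5, 3: 0.5}
```

Player 1 is left with zero leverage, since 1 can no longer complete a winning coalition with only one cabal member, and the cabal's combined share rises from 2/3 to 1.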