It seems like replacing two agents A and B with a single agent that optimizes for their welfare function would avoid the issue of punishment. I guess that doing this might be feasible in some cases for artificial agents (a single agent optimizing for the welfare function is a simpler object than the two-agent dynamics, punishment included) and is potentially understudied, as the solution seems harder to implement for humans (even though human solutions to collective action problems at least resemble the approach). One key problem might be finding a welfare function that both agents agree on, especially if there is information asymmetry.

Any thoughts on this?
Edit: The approach seems most straightforward when both agents share a world model and optimize for explicit utilities over that world model. More generally, two principals with similar amounts of compute and similarly easily optimizable utility functions are most likely better off building a single agent that optimizes for their welfare rather than two agents that need to learn to compete and cooperate. Optimizing for the welfare function applied to the agents' value functions can be done by a fairly straightforward modification of Q-learning or (in the case of differentiable welfare) policy gradient methods.
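To make the Q-learning remark concrete, here is a minimal sketch of what such a modification could look like in a toy stateless two-player game: the joint agent keeps one Q-table per principal (indexed by joint actions) and picks the joint action that maximizes a welfare function of the two value estimates. The payoff matrices, the Nash-product welfare function, and all hyperparameters are illustrative assumptions on my part, not anything fixed by the discussion above.

```python
import numpy as np

# Hypothetical 2x2 game (prisoner's-dilemma-like payoffs, purely for illustration).
# Row index = A's action, column index = B's action; 0 = cooperate, 1 = defect.
PAYOFF_A = np.array([[3.0, 0.0],
                     [4.0, 1.0]])
PAYOFF_B = np.array([[3.0, 4.0],
                     [0.0, 1.0]])

JOINT_ACTIONS = [(a, b) for a in range(2) for b in range(2)]

def welfare(qa, qb):
    """Example welfare function: Nash product of the two value estimates.
    Any monotone welfare function could be substituted here."""
    return qa * qb  # assumes values stay non-negative in this toy setting

def welfare_q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # One Q-table per principal, both indexed by the *joint* action.
    q_a = np.zeros(len(JOINT_ACTIONS))
    q_b = np.zeros(len(JOINT_ACTIONS))

    for _ in range(episodes):
        # epsilon-greedy, but greedy with respect to the welfare of the value estimates
        if rng.random() < eps:
            idx = rng.integers(len(JOINT_ACTIONS))
        else:
            idx = int(np.argmax([welfare(qa, qb) for qa, qb in zip(q_a, q_b)]))
        a, b = JOINT_ACTIONS[idx]
        r_a, r_b = PAYOFF_A[a, b], PAYOFF_B[a, b]

        # bootstrap both value estimates with the welfare-greedy next joint action
        nxt = int(np.argmax([welfare(qa, qb) for qa, qb in zip(q_a, q_b)]))
        q_a[idx] += alpha * (r_a + gamma * q_a[nxt] - q_a[idx])
        q_b[idx] += alpha * (r_b + gamma * q_b[nxt] - q_b[idx])
    return q_a, q_b

if __name__ == "__main__":
    q_a, q_b = welfare_q_learning()
    best = int(np.argmax([welfare(a, b) for a, b in zip(q_a, q_b)]))
    print("joint action chosen by the welfare-optimizing agent:", JOINT_ACTIONS[best])
```

In this toy game the welfare-maximizing joint action is mutual cooperation, so the single agent settles there directly, without needing any punishment dynamics between separate learners. The policy-gradient analogue would be similar: keep one value/critic per principal and take gradient steps on the (differentiable) welfare of their expected returns.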
I definitely think it’s worth exploring. I have the intuition that creating a single agent might be difficult for various logistical and political reasons, and so it feels more robust to figure out the multiagent case. But I would certainly like to have a clearer picture of how and under what circumstances several AI developers might implement a single compromise agent.