You could get your framework by adapting existing frameworks to fit your meta-agent utility function. Examples (a rough sketch follows the list):
- The utilitarian framework, which seeks to maximize the sum of utility over all agents.
- The Rawlsian maximin framework, which seeks to maximize the utility of the worst-off agent.
- The Nozickian entitlement framework, which seeks to give each agent the maximal entitlement they could have, given the constraints of the system.
- The Nussbaumian capability approach, which seeks to give each agent the maximal capability they could have, given the constraints of the system.
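For concreteness, here is a minimal sketch (in Python, with hypothetical names not drawn from the source) of how the first two frameworks could be cast as aggregation rules over per-agent utilities; the entitlement and capability approaches would need a richer representation of the system's constraints and each agent's holdings or capabilities, so they are not reduced to a scalar rule here.

```python
# Sketch: each framework becomes an aggregation rule mapping per-agent
# utilities to a single meta-agent utility value. Names are hypothetical.
from typing import Callable, Sequence

AggregationRule = Callable[[Sequence[float]], float]

def utilitarian(utilities: Sequence[float]) -> float:
    """Utilitarian rule: sum of utility over all agents."""
    return sum(utilities)

def rawlsian_maximin(utilities: Sequence[float]) -> float:
    """Rawlsian maximin rule: utility of the worst-off agent."""
    return min(utilities)

def meta_agent_utility(utilities: Sequence[float], rule: AggregationRule) -> float:
    """Meta-agent utility of a utility profile under a chosen aggregation rule."""
    return rule(utilities)

# The same utility profile evaluated under each rule.
profile = [3.0, 1.0, 5.0]
print(meta_agent_utility(profile, utilitarian))       # 9.0
print(meta_agent_utility(profile, rawlsian_maximin))  # 1.0
```

The point of the sketch is only that the choice of rule is doing all the ethical work: the same profile is ranked very differently depending on which aggregation you pick.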
I think in the end you would get stuck on the unsolved problem of balancing the needs of individuals against the needs of the collective.
This is alignment’s “Attention Is All You Need” moment.