A system of AI services is not equivalent to a utility-maximizing agent
I think this section of the report would be stronger if you showed that CAIS or Open Agencies in particular are not equivalent to a utility-maximizing agent. You're right that there are multi-agent systems with this property (e.g. CDT agents in a prisoner's dilemma), but not every system of multiple agents is inequivalent to a utility maximizer.
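To make the prisoner's-dilemma point concrete, here is a minimal sketch; the payoff numbers and the `best_response` helper are illustrative assumptions, not anything from the report. Two CDT-style agents each best-respond to the other and defect, landing on a Pareto-dominated outcome, so their joint behavior is not what any single maximizer of a utility that is increasing in both players' payoffs would choose.

```python
# Minimal sketch (illustrative payoffs, not from the report): two CDT-style
# agents each best-respond to the other's fixed strategy in a one-shot
# prisoner's dilemma. Their joint outcome is Pareto-dominated, so the pair's
# behavior cannot be recovered by maximizing any utility function that is
# monotone increasing in both players' payoffs.

from itertools import product

# Standard prisoner's dilemma payoffs: (row player, column player).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
ACTIONS = ("C", "D")

def best_response(opponent_action, player):
    """Pick the action maximizing this player's own payoff, holding the
    opponent fixed -- the CDT-style reasoning each agent is assumed to use."""
    idx = 0 if player == "row" else 1
    def payoff(a):
        pair = (a, opponent_action) if player == "row" else (opponent_action, a)
        return PAYOFFS[pair][idx]
    return max(ACTIONS, key=payoff)

# Defection is dominant: each agent defects whatever the other does.
assert best_response("C", "row") == "D" and best_response("D", "row") == "D"
assert best_response("C", "col") == "D" and best_response("D", "col") == "D"

joint_outcome = PAYOFFS[("D", "D")]
print("system's joint outcome:", joint_outcome)   # (1, 1)

# (C, C) strictly Pareto-dominates the system's outcome, so no aggregate
# utility increasing in both payoffs is maximized by what the system does.
dominated = any(
    all(PAYOFFS[a][i] > joint_outcome[i] for i in (0, 1))
    for a in product(ACTIONS, ACTIONS)
)
print("Pareto-dominated:", dominated)             # True
```

The same check would come out the other way for many other multi-agent systems (e.g. agents with aligned payoffs), which is why showing the inequivalence for CAIS or Open Agencies specifically would strengthen the argument.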