A multi-machine solution? Is that so very different from one machine with a different internal architecture?
I favor a multi-agent solution which includes both human and machine agents. But, yes, a multi-machine solution may well differ from a unified artificial rational agent. For one thing, the composite will not itself be a rational agent (it may split its charitable contributions between two different charities, for example). :)
ETA: More to the point, a singleton must self-modify to ‘grow’ in power and intelligence, and will strive to preserve its utility function (values) in the course of doing so. A coalition, on the other hand, grows in power by creating or enlisting new members. So, for example, rogue AIs can be incorporated into a coalition, whereas a singleton will have to defeat and destroy them. Furthermore, the political balance within a coalition may shift over time, as agents who are willing to delay gratification gain in power, and agents who demand instant gratification lose relative power. And as the political balance shifts, so does the effective composite utility function.
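To make that last point concrete, here is a toy sketch (my own made-up numbers and proxies, not something the comment above commits to): treat the coalition's effective utility as a wealth-weighted sum of its members' utilities, and let the more patient members reinvest a larger share of their income. The weights, and with them the composite utility function, drift toward the patient agents.

```python
# Toy model: how a coalition's "effective" utility function can drift as
# patient members accumulate relative power. All numbers are illustrative.

def simulate(members, years=50, growth=0.05):
    """Advance each member's wealth; patient members reinvest more of their income."""
    for _ in range(years):
        for m in members:
            income = m["wealth"] * growth
            m["wealth"] += income * m["savings_rate"]
    return members

def composite_utility(members, outcome):
    """Effective coalition utility: each member's utility weighted by their
    share of total wealth (a crude stand-in for political power)."""
    total = sum(m["wealth"] for m in members)
    return sum((m["wealth"] / total) * m["utility"](outcome) for m in members)

members = [
    {"name": "patient",   "savings_rate": 0.9, "wealth": 1.0,
     "utility": lambda o: o["long_term"]},
    {"name": "impatient", "savings_rate": 0.1, "wealth": 1.0,
     "utility": lambda o: o["short_term"]},
]

# An outcome that only the patient member values.
outcome = {"long_term": 1.0, "short_term": 0.0}

print(composite_utility(members, outcome))  # 0.5  (power split evenly at the start)
simulate(members)
print(composite_utility(members, outcome))  # ~0.88 (patient member now dominates)
```

Wealth share is only one crude proxy for political power within a coalition, but any proxy that compounds over time will produce the same drift.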
It sounds as though you are thinking about the early days.
ISTM that a single creature could grow in the manner you describe a coalition growing: by assimilation and compromise. It might not naturally favour behaving in that way, but it is possible to make an agent with whatever values you like.
More to the point, if a single creature forms from a global government, or the internet, it will probably start off in a pretty inclusive state. Only the terrorists will be excluded. There is no monopolies and mergers commission at that level, just a hangover from past, fragmented times.