Well, AI 1 sends a proposal for a joint decision algorithm to AI 2, then AI 2 sends a counter-proposal and they bargain. They do this until they agree on the joint decision algorithm. They then jointly build a new AI and each monitors the construction process to ensure that the new AI really implements the joint decision algorithm that they agreed on. Finally they each transfer all resources to the new AI and shut down.
Does that answer your question, or were you asking something else?
I was asking for a rigorous model of that: some controlled tournament setting that you could implement in Python today, and two small programs you could implement in Python today that would do what you mean. Surely the hard AI issues shouldn’t pose a problem because you can always hardcode any “understanding” that goes on, e.g. “this here incoming packet says that I should do X”. At least that’s my direction of inquiry, because I’m too scared of making hard-to-notice mistakes when handwaving about such things.
In case anyone is wondering what happened to this conversation, cousin_it and I took it offline. I’ll try to write up a summary of that and my current thoughts, but in the meantime here’s what I wrote as the initial reply:
Ok, I’ll try. Assume there is a “joint secure construction service” which takes as input a string from each player, and constructs one machine for each distinct string that it receives, using that string as its program. Each player then has the option to transfer its “resources” to the constructed machine, in which case the machine will then play all of the player’s moves. Then the machines and any players who chose not to transfer “resources” will play the base game (which let’s say is the Prisoner’s Dilemma).
The Nash equilibrium for this game should be for all players to choose a common program and then choose to transfer resources. That program plays “cooperate” if everyone did that, otherwise it plays “defect”.
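Since the original question asked for something you could implement in Python today, here is a minimal sketch of that tournament. Everything in it (the function names, the dict layout for players, using `exec` to stand in for the construction service) is my own invented scaffolding, not anything agreed on in the thread; it just tries to make the two rules concrete: distinct submitted strings yield distinct machines, and a machine plays on behalf of every player who transferred to it.

```python
# Sketch of the "joint secure construction service" two-player tournament.
# All names and data layouts here are illustrative assumptions.

def pd_payoffs(a, b):
    # Standard Prisoner's Dilemma payoff matrix.
    table = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }
    return table[(a, b)]

def run_tournament(players):
    """Each player is a dict:
         'program':  source string submitted to the construction service
         'transfer': whether the player hands its move over to the machine
         'move':     the move played directly, if the player didn't transfer
       The service builds one machine per distinct program string, so players
       submitting the same string share a single machine."""
    # The flag the common program checks: did every player submit the same
    # string AND transfer resources to the resulting machine?
    everyone_joined = (
        all(p["transfer"] for p in players)
        and len({p["program"] for p in players}) == 1
    )
    moves = []
    for p in players:
        if p["transfer"]:
            # The machine plays the player's move: run the submitted program
            # (which must define move(everyone_joined)) and call it.
            env = {}
            exec(p["program"], env)
            moves.append(env["move"](everyone_joined))
        else:
            moves.append(p["move"])  # player plays the base game directly
    return pd_payoffs(moves[0], moves[1])

# The equilibrium program from the text: cooperate iff everyone chose the
# common machine and transferred, otherwise defect.
COMMON_PROGRAM = """
def move(everyone_joined):
    return "C" if everyone_joined else "D"
"""

p1 = {"program": COMMON_PROGRAM, "transfer": True}
p2 = {"program": COMMON_PROGRAM, "transfer": True}
print(run_tournament([p1, p2]))  # (3, 3): mutual cooperation
```

A quick deviation check shows why this should be an equilibrium: if player 2 instead keeps its resources and defects (`{"program": COMMON_PROGRAM, "transfer": False, "move": "D"}`), the machine sees that not everyone joined and defects too, giving the deviator 1 instead of 3.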