They went from running on two different nodes with different code, each node controlling its own resources, to running on just one node with code they both agreed upon, which controls both of their resources.
I added “(which they both agreed upon)” to the post. Does that clear it up, or is the unclarity somewhere else?
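For concreteness, here is a minimal sketch (my own illustration, not anything from the post) of the merge as described above: two nodes, each controlling only its own resources, are replaced by a single node running code both parties agreed on, which controls the union of their resources. All the names here (`Node`, `merge`, the toy policies) are hypothetical.

```python
# Minimal sketch of "merge into one node running agreed-upon code".
# Everything here is illustrative; names and policies are made up.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Node:
    code: Callable[[Dict[str, float]], str]   # the policy this node runs
    resources: Dict[str, float]               # resources only this node controls

    def act(self) -> str:
        return self.code(self.resources)

def merge(a: Node, b: Node, agreed_code: Callable[[Dict[str, float]], str]) -> Node:
    """Replace two nodes with one node running agreed_code over the pooled resources."""
    pooled = {**a.resources, **b.resources}
    return Node(code=agreed_code, resources=pooled)

# Example: two AIs with different policies hand everything to a jointly chosen policy.
ai_1 = Node(code=lambda r: f"spend {r} on goal 1", resources={"cpu_a": 10.0})
ai_2 = Node(code=lambda r: f"spend {r} on goal 2", resources={"cpu_b": 20.0})

joint = merge(ai_1, ai_2, agreed_code=lambda r: f"split {r} evenly between goals 1 and 2")
print(joint.act())  # the single merged node now controls both cpu_a and cpu_b
```

On this toy picture, the enforcement is structural: once the merged node exists, neither original policy runs anymore, so there is no separate party left who could renege.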
What is the difference between two (interacting) nodes controlling their own resources and (instead) one distributed node controlling those resources? What do you mean by “control”? What is gained by calling basically the same thing “one node” rather than “two nodes”? If the important part is “agreed-upon”, how does that work in terms of the two interacting processes, and how does it translate into a statement about the joint process?
I’m confused about your confusion. I’m using familiar human terms, and assuming straightforward (though not necessarily easy) translation into AI implementation. Is there no obvious counterpart for “control” and “agreed-upon” in AI?
What if we assume that the two original AIs are human uploads? Does that help?
Or take two flesh-and-blood humans, and suppose they jointly program a robot and then die, physically handing their assets over to the robot before they die. Would you also say the two humans can already be considered a single distributed process, and that there’s nothing to be gained by calling the same thing “one node” rather than “two nodes”?
What if the first AI can just ask the second AI to do anything with its resources, and the second AI just does that? Just because the AIs agree to some action (merging) doesn’t mean that the action was really an optimal choice for them: they could be wrong. You need a more formal model to deal with such issues (for example, the AIs could have utility functions over outcomes, but then again we face the issue of bargaining).
I don’t see how this setting helps to reduce bargaining/cooperation.
I assume that the original AIs use a standard decision theory. You’re right that it doesn’t reduce bargaining. That wasn’t the goal. The goal is to allow any agreements that do form to be enforced. In other words, I’m trying to eliminate courts, not bargaining.
Of course a solution that also eliminates bargaining would be even better, but that might be too ambitious. Feel free to prove me wrong though. :)
Agreements are solutions to bargaining, so how can you have agreements without solving bargaining? In other words, if you just assign some agreement without considering a process of reaching it, why do you need to consider the two agents before the merge at all?
I don’t just assign some agreement without considering a process of reaching it. I make that a separate problem to be solved, and just assume here that AIs do have some way to reach agreements by bargaining, in order to focus on the mechanism for enforcement of agreements.
ETA: Do you think common knowledge of source code can help with the bargaining problem in some way? Maybe that’s the insight I’m missing?
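To make the assumption above (that the AIs have some way to reach agreements by bargaining) concrete, here is a minimal sketch of one standard formalization, the Nash bargaining solution: pick the feasible outcome that maximizes the product of each party’s gain over its disagreement point. The outcome set and utility numbers below are made up purely for illustration.

```python
# Toy Nash bargaining sketch: the outcomes and utilities are invented for illustration.

# Feasible joint outcomes and each AI's utility for them.
outcomes = {
    "merge_50_50": (6.0, 6.0),
    "merge_70_30": (8.0, 3.0),
    "no_merge":    (4.0, 4.0),   # disagreement point: each AI keeps its own node
}

d1, d2 = outcomes["no_merge"]    # utilities if no agreement is reached

def nash_product(u):
    """Product of each party's gain over the disagreement point (clipped at zero)."""
    u1, u2 = u
    return max(u1 - d1, 0.0) * max(u2 - d2, 0.0)

best = max(outcomes, key=lambda o: nash_product(outcomes[o]))
print(best)  # "merge_50_50": the agreement both would accept, on this toy model
```

Whatever bargaining solution the AIs actually use, the point of the post is only that the resulting agreement gets enforced by the merge itself rather than by courts.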