The Aumann theorem requires identical priors and identical sets of available information.
I think sharing all information is doable. As for priors, there’s a beautiful LW trick called “probability as caring” which can almost always make priors identical. For example, before flipping a coin I can say that all good things in life will be worth 9x more to me in case of heads than tails. That’s purely a utility function transformation which doesn’t touch the prior, but for all decision-making purposes it’s equivalent to changing my prior about the coin to 90/10 and leaving the utility function intact. That handles all worlds except those that have zero probability according to one of the AIs. But in such worlds it’s fine to just give the other AI all the utility.
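To spell out the arithmetic behind this trick (my own reconstruction, not part of the original comment): multiplying utilities by the ratio of the two priors converts expected utility under one prior into expected utility under the other, up to a positive constant that never changes which action comes out on top. For the coin example, for any action $a$,

$$0.5 \cdot 9\,u(a,H) + 0.5 \cdot u(a,T) \;=\; 5\,\bigl(0.9\,u(a,H) + 0.1\,u(a,T)\bigr),$$

and more generally, for priors $p$ and $q$ over the same set of worlds with $p(w) > 0$ everywhere,

$$\sum_w p(w)\,\frac{q(w)}{p(w)}\,u(a,w) \;=\; \sum_w q(w)\,u(a,w),$$

so an agent with prior $p$ whose utilities are reweighted by $q(w)/p(w)$ ranks actions exactly as an agent with prior $q$ and the original utilities. The zero-probability caveat above is precisely the case where the ratio $q(w)/p(w)$ is undefined.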
In all cases? Information is power.
There is an old question that goes back to Abraham Lincoln or something:
If you call a dog’s tail a leg, how many legs does a dog have?
I think the idea is that if one AI says there is a 50% chance of heads, and the other AI says there is a 90% chance of heads, the first AI can describe the second AI as knowing that there is a 50% chance, but caring more about the heads outcome. Since it can redescribe the other’s probabilities as matching its own, agreement on what should be done will be possible. None of this means that anyone actually decides that something will be worth more to them in the case of heads.
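A tiny sketch of that redescription, using my own illustrative numbers and action names (nothing here is from the thread): the first AI can model the second AI either with the second AI’s own 90/10 prior and plain utilities, or with the first AI’s 50/50 prior and utilities weighted 9x in heads-worlds, and both models select the same action.

```python
# Illustrative sketch only: the action names and utility values are made up.

def expected_utilities(prior, utility):
    """Expected utility of each action, given a prior over the states 'H' and 'T'."""
    return {a: sum(prior[s] * utility[a][s] for s in prior) for a in utility}

def best_action(prior, utility):
    eu = expected_utilities(prior, utility)
    return max(eu, key=eu.get)

# The second AI's utility table (arbitrary example values).
u2 = {
    "build_dam":   {"H": 3.0, "T": 1.0},
    "plant_crops": {"H": 1.0, "T": 4.0},
    "do_nothing":  {"H": 0.0, "T": 0.0},
}

# Model 1: the second AI as it describes itself -- 90/10 prior, plain utilities.
own_view = best_action({"H": 0.9, "T": 0.1}, u2)

# Model 2: the first AI's redescription -- shared 50/50 prior, with the second AI
# "caring" 9x more about heads-worlds (utilities reweighted, prior untouched).
reweighted = {a: {"H": 9 * u2[a]["H"], "T": u2[a]["T"]} for a in u2}
redescribed_view = best_action({"H": 0.5, "T": 0.5}, reweighted)

assert own_view == redescribed_view  # both descriptions prefer the same action
print(own_view)  # -> build_dam with these numbers
```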
the first AI can describe the second AI as knowing that there is a 50% chance, but caring more about the heads outcome.
First of all, this makes sense only in the decision-making context (and not in the forecast-the-future context). So this is not about what will actually happen but about comparing the utilities of two outcomes. You can, indeed, rescale the utility involved in a simple case, but I suspect that once you get to interdependencies and non-linear consequences things will get much hairier, if the rescaling is possible at all.
Besides, this requires you to know the utility function in question.
Not if the “resource” is the head of one of the rational agents on a plate.