It’s not just a case of any two agents having fuzzy approximations to the same world view. In the least convenient case, agents will start off with radically different beliefs, and those beliefs will affect what they consider to be evidence, and how they interpret evidence. So there is no reason for agents to ever converge in the least convenient case.
Aumann’s theorem assumes the most convenient case
Aumann’s theorem assumes rational agents: agents that treat every observation as evidence and appropriately update the probability of every hypothesis in their distribution. That includes agents who start with radically different beliefs, because for a rational agent “belief” is just a probability distribution over possible hypotheses, and shared evidence washes out even wildly different priors, as the sketch below shows.
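A minimal sketch of that mechanism, in plain Python with made-up numbers (a five-hypothesis toy, not anything from Aumann’s proof): two agents start with near-opposite priors over the same hypothesis space, observe the same evidence, and end up with nearly identical posteriors.

```python
import random

# Hypotheses: possible biases of a coin (probability of heads).
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]

# Two agents with radically different priors over the SAME hypothesis space.
prior_a = [0.70, 0.15, 0.05, 0.05, 0.05]  # agent A: sure the coin favors tails
prior_b = [0.05, 0.05, 0.05, 0.15, 0.70]  # agent B: sure the coin favors heads

def update(prior, heads):
    """One Bayesian update: reweight each hypothesis by the
    likelihood it assigns to the observation, then renormalize."""
    likelihoods = [h if heads else (1 - h) for h in hypotheses]
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

random.seed(0)
true_bias = 0.7
for _ in range(200):
    heads = random.random() < true_bias  # a shared evidence stream
    prior_a = update(prior_a, heads)
    prior_b = update(prior_b, heads)

# Despite starting far apart, both posteriors concentrate on the same hypothesis.
print([round(p, 3) for p in prior_a])
print([round(p, 3) for p in prior_b])
```

With five one-parameter hypotheses this convergence is trivial, which is exactly the convenience the next paragraph objects to.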
The problem is that each hypothesis is a massively multidimensional model of the world, and no real person can even hold one fully in mind. There is no hope whatsoever that anyone can accurately update weightings over an enormous space of such hypotheses on every observation.
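To put a rough number on “enormous” (a back-of-the-envelope count, not a claim about any particular formalism): even the space of deterministic yes/no hypotheses over a handful of binary features grows doubly exponentially.

```python
# With n binary features there are 2**(2**n) distinct deterministic
# hypotheses mapping feature vectors to a yes/no label (one per truth table).
for n in [3, 4, 5, 6]:
    print(f"{n} features -> {2 ** (2 ** n):,} hypotheses")
```

And real hypotheses about the world are rich models, not truth tables over six bits, so the actual blow-up is incomparably worse.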
So we live in an even less convenient world than the “least convenient case” that was proposed: nobody in the real world is rational in the sense of Aumann’s theorem. Not even a superintelligent AGI will ever be, because the space of possible hypotheses about the world is always enormously more complex than the actual world, and the actual world is more complex than any given agent in it.