Is this asking whether ontology generation via debate is guaranteed to converge? Is this moving Aumann’s agreement ‘up a level’?
In Aumann, you have two Bayesian reasoners who are motivated to believe true things and who, because they’re reasoning in similar ways, can use the output of the other reasoner’s cognitive process to refine their own estimate, in a way that eventually converges.
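To make that mechanism concrete, here’s a minimal toy sketch (not from the discussion itself; the binary world state, the 0.8 signal accuracy, and all names below are made-up assumptions): two agents share a prior, each sees one private noisy signal, and because each knows exactly how the other updates, an announced posterior lets the listener back out the private signal behind it and condition on it.

```python
# Toy Aumann-style agreement: shared prior, private signals, and announcements
# that each agent can invert because they know how the other agent reasons.
# All numbers and names are illustrative assumptions, not anyone's actual model.

PRIOR_H = 0.5     # common prior P(state = 1)
ACCURACY = 0.8    # P(signal = state), known to both agents

def posterior(prior_h, signal):
    """P(state = 1 | signal) for one conditionally independent noisy signal."""
    like_h = ACCURACY if signal == 1 else 1 - ACCURACY
    like_not_h = (1 - ACCURACY) if signal == 1 else ACCURACY
    return like_h * prior_h / (like_h * prior_h + like_not_h * (1 - prior_h))

def invert(announced, update):
    """Recover the private signal that would have produced this announcement."""
    for s in (0, 1):
        if abs(update(s) - announced) < 1e-9:
            return s
    raise ValueError("announcement inconsistent with the shared model")

a_signal, b_signal = 1, 0   # what each agent privately observes

# Round 1: A announces its posterior; B inverts it, learns A's signal,
# and conditions on both signals.
a_announced = posterior(PRIOR_H, a_signal)
a_signal_seen_by_b = invert(a_announced, lambda s: posterior(PRIOR_H, s))
b_belief = posterior(posterior(PRIOR_H, a_signal_seen_by_b), b_signal)

# Round 2: B announces; A inverts it the same way, now holds the same
# information set, and the two beliefs coincide.
b_signal_seen_by_a = invert(b_belief, lambda s: posterior(posterior(PRIOR_H, a_signal), s))
a_belief = posterior(posterior(PRIOR_H, a_signal), b_signal_seen_by_a)

assert abs(a_belief - b_belief) < 1e-9
print(a_belief, b_belief)   # both 0.5 here: the conflicting signals cancel out
```

In the real theorem the announcements don’t have to reveal the signals this cleanly and agreement can take many rounds, but the inversion step is the “use the output of the other reasoner’s cognitive process” move.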
Here, the reasoners are non-Bayesian, so we can’t reach the same sort of conclusions about what they’ll eventually believe. And it seems like this idea relies fairly heavily on game-theory-like considerations, where a statement is convincing not so much because the blue player said it but because the red player didn’t contradict it (and, since they have ‘opposing’ goals, that lack of contradiction is evidence that it’s true and relevant).
There’s a piece that’s Aumann-like in that it’s asking “how much knowledge can we extract from transferring small amounts of a limited sort of information?”—here, we’re only transferring “one pixel” per person, plus potentially large amounts of discussion about what those pixels would imply, and seeing how much that discussion can get us.
But I think ‘convergence’ is the wrong sort of way to think about it. Instead, it seems more like asking “how much of a constraint on lying is it that someone with as much information as you could expose one small fact related to the lie you’re trying to tell?”. It could be the case that this means liars basically can’t win, because their hands are tied behind their backs relative to the truth-teller; or it could be the case that debate between adversarial agents is a fundamentally bad way to arrive at the truth, such that these adversarial approaches can’t get us the sort of trust that we need. (Or perhaps we need some subtle modifications, and then it would work.)
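As a toy version of that constraint (a made-up setup, not the actual pixel-debate experiment): the hidden data is a bit-string, the liar’s claim is “every bit is 1”, and the honest opponent, who sees everything, may transfer exactly one (index, value) pair to a judge who sees nothing else.

```python
import random

# Illustrative only: a lie that is locally checkable can be sunk by revealing
# a single well-chosen fact, which is the sense in which the liar's hands are tied.

random.seed(0)
data = [random.randint(0, 1) for _ in range(16)]   # the hidden "image"
liars_claim = "every bit is 1"

def honest_rebuttal(bits):
    """Transfer one small fact: some (index, value) pair that contradicts the claim."""
    for i, b in enumerate(bits):
        if b == 0:
            return (i, b)
    return None   # the claim happens to be true; there is nothing to expose

def judge(revealed):
    # The judge never sees the full data, only the single revealed pair.
    if revealed is not None and revealed[1] == 0:
        return "liar loses"
    return "claim stands"

print(judge(honest_rebuttal(data)))   # "liar loses" unless the data really is all ones
```

The open question in the paragraph above is the other case: lies whose falsity isn’t visible in any single revealed fact (a wrong claim about a parity or an aggregate, say), where this kind of one-fact exposure doesn’t obviously bind.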
That makes sense. I’d frame that last bit more as: which bit, if revealed, would screen off the largest part of the dataset? Which might bridge this to more standard search strategies. Have you seen Argumentation in Artificial Intelligence?
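One way to cash out “which bit, if revealed, would screen off the largest part of the dataset” (a rough sketch under my own framing, not anything from the book): treat the hypotheses still consistent with the debate so far as a uniform candidate set and pick the position whose value is expected to eliminate the most of them, i.e. the information-gain criterion familiar from decision-tree splits or twenty questions.

```python
import math
from collections import Counter

def entropy(n):
    """Entropy in bits of a uniform distribution over n candidates."""
    return math.log2(n) if n > 0 else 0.0

def most_screening_bit(candidates):
    """Pick the position whose revealed value maximizes expected information gain."""
    n = len(candidates)
    best_pos, best_gain = None, -1.0
    for pos in range(len(candidates[0])):
        counts = Counter(c[pos] for c in candidates)
        expected_remaining = sum((k / n) * entropy(k) for k in counts.values())
        gain = entropy(n) - expected_remaining
        if gain > best_gain:
            best_pos, best_gain = pos, gain
    return best_pos, best_gain

# Hypothetical candidate set still consistent with the discussion so far.
candidates = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 1, 1)]
print(most_screening_bit(candidates))   # (1, 1.0): position 1 halves the candidate set
```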