We also don’t meet one of the other requirements of Aumann’s Agreement Theorem: we don’t have the same prior beliefs. This is likely intuitively true to you, but it’s worth proving. For us to all have the same prior beliefs we’d need to all be born with the same priors. This seems unlikely, but for the sake of argument let’s suppose it’s true that we are.
I want to put up a bit of a defense of the common prior assumption, although in reality I’m not so insistent on it.
First of all, we aren’t ideal Bayesian agents, so what we are as a baby isn’t necessarily what we should identify as “our prior”. If we think of ourselves as trying to approximate an ideal Bayesian reasoner, then it seems like part of the project is constructing an ideal prior to start with. EG, many people like Solomonoff’s prior. These people could be said to agree on a common prior in an important way. (Especially if they furthermore can agree on a UTM to use.)
But we can go further. Suppose that two people currently disagree about the Solomonoff prior. It’s plausible that they have reasons for doing so, which they can discuss. This involves some question-begging, since it assumes the kind of convergence that we’ve set out to prove, but I am fine with resigning myself to illustrating the coherence of the pro-agreement camp rather than decisively arguing it. The point is that philosophical disagreements about priors can often be resolved, so even if two people can’t initially agree on the Solomonoff prior, we might still expect convergence on that point after sufficient discussion.
In this picture, the disagreement is all about the approximation, and not at all about non-common priors. If we could approximate ideal rationality better, we could agree.
Another argument in favor of a common-prior assumption is that even if we model people as starting out with different priors, we expect people to have experienced actually quite a lot of the world before they come together to discuss some specific disagreement. In your writing, you treat the different data as a reason for diverging opinions—but taking another perspective, we might argue that they’ve both experienced enough data that they should have broadly converged on a very large number of beliefs, EG about how things fall to the ground when unsupported, what things dissolve in water, how other humans tend to behave, et cetera.
We might broadly (imprecisely) argue that they’ve drawn different data from the same distribution, so after enough data, they should reach very similar conclusions.
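To make that convergence intuition concrete, here is a minimal toy sketch (my illustration, with assumed numbers, not anything from the original exchange): two agents start from very different Beta priors over a coin’s bias, each observes their own flips of the same coin, and with enough flips both posterior means land near the true bias, so the differing priors wash out.

```python
import random

# Toy illustration (assumed setup): two agents with very different Beta priors
# over a coin's bias each observe their *own* flips of the *same* coin.
# With enough data, their posterior means nearly agree.

random.seed(0)
TRUE_BIAS = 0.7   # bias of the shared data-generating coin (arbitrary choice)
N = 10_000        # number of flips each agent observes

def posterior_mean(alpha, beta, flips):
    """Posterior mean of a Beta(alpha, beta) prior after Bernoulli flips."""
    heads = sum(flips)
    return (alpha + heads) / (alpha + beta + len(flips))

flips_a = [random.random() < TRUE_BIAS for _ in range(N)]
flips_b = [random.random() < TRUE_BIAS for _ in range(N)]  # different sample, same distribution

print(posterior_mean(1, 20, flips_a))   # agent A's prior leans heavily toward tails
print(posterior_mean(20, 1, flips_b))   # agent B's prior leans heavily toward heads
# Both outputs come out close to 0.7: the data swamps the priors.
```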
Since “prior” is a relative term (every posterior acts as a prior for the next update), we could then argue that they’ve probably come to the current situation with very similar priors about that situation (that is, they would have done, if they’d been ideally rational Bayesians the whole time), even if they don’t agree on, say, the Solomonoff prior.
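Spelling the parenthetical out in symbols (just a restatement, nothing new): conditioning on x₁ and then on x₂ gives the same posterior as conditioning on both at once, which is why the posterior after past experience simply plays the role of a prior for the present question.

```latex
P(\theta \mid x_1, x_2)
  \;\propto\; P(x_2 \mid \theta, x_1)\, P(\theta \mid x_1)
  \;\propto\; P(x_1, x_2 \mid \theta)\, P(\theta)
```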
The practical implication of this would be something like: when disagreeing, people actually know enough facts about the world to come to agree, if only they could properly integrate all the information.
But we can go further. Suppose that two people currently disagree about the Solomonoff prior. It’s plausible that they have reasons for doing so, which they can discuss.
Sure, but where does that lead? If they discuss it using basically the same epistemology, they might agree, and if they have fundamentally different epistemologies, they probably won’t. They could have a discussion about their underlying epistemology, but then the same dichotomy re-occurs at a deeper level. There’s no way of proving that two people who disagree can have a productive discussion that leads to agreement without assuming some measure of pre-existing agreement at some level.
This involves some question-begging,
Yep.
but I am fine with resigning myself to illustrating the coherence of the pro-agreement camp
That doesn’t imply the incoherence of the anti-agreement camp. Coherence is like that: it’s a rather weak condition, particularly in the sense that it can’t show there is a single coherent view. If you believe there is a single truth, you shouldn’t treat coherence as the sole criterion of truth.
Another argument in favor of a common-prior assumption is that even if we model people as starting out with different priors, we expect people to have experienced actually quite a lot of the world before they come together to discuss some specific disagreement.
But that doesn’t imply that they will converge without another question-begging assumption: that they will interpret and weight the evidence similarly. One person regards the Bible as evidence, another does not.
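To make the weighting point concrete (a toy example of mine, with assumed numbers): give two agents the very same observation but different likelihood models for it, and a single Bayes update pushes their credences in opposite directions.

```python
# Toy illustration (assumed numbers): the same observation E, but the two
# agents hold different likelihood models P(E | H), so the shared evidence
# moves their credences in H in opposite directions.

def update(prior_h, p_e_given_h, p_e_given_not_h):
    """Single Bayes update on a binary hypothesis H after observing E."""
    joint_h = p_e_given_h * prior_h
    joint_not_h = p_e_given_not_h * (1 - prior_h)
    return joint_h / (joint_h + joint_not_h)

prior = 0.5  # both agents start undecided about H

print(update(prior, 0.8, 0.2))  # agent A counts E as evidence for H:     0.8
print(update(prior, 0.2, 0.8))  # agent B counts E as evidence against H: 0.2
```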
We might broadly (imprecisely) argue that they’ve drawn different data from the same distribution, so after enough data, they should reach very similar conclusions.
If one person always rejects another’s “data”, that need not happen. You can have an infinite amount of data that is all of one type. Infinite in quantity doesn’t imply infinitely varied.
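A worked version of the “infinite but unvaried” point (my gloss): if every observation is of a type that is probabilistically independent of the disputed question H, the likelihood factor cancels and the posterior on H equals the prior on H at every sample size, so unlimited data of that one type never forces convergence.

```latex
P(H \mid x_{1:n})
  = \frac{P(x_{1:n} \mid H)\,P(H)}{P(x_{1:n})}
  = \frac{P(x_{1:n})\,P(H)}{P(x_{1:n})}
  = P(H)
\qquad \text{whenever } x_{1:n} \text{ is independent of } H.
```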
if only they could properly integrate all the information.
They need to agree on what counts as information (data, evidence) in the first place.
That doesn’t imply the incoherence of the anti-agreement camp.
I basically think that agreement-bayes and non-agreement-bayes are two different models with various pros and cons. Both of them are high-error models in the sense that they model humans as an approximation of ideal rationality.
Coherence is like that: it’s a rather weak condition, particularly in the sense that it can’t show there is a single coherent view. If you believe there is a single truth, you shouldn’t treat coherence as the sole criterion of truth.
I think this is reasoning too loosely about a broad category of theories. An individual coherent view can coherently think there’s a unique truth. I mentioned in another comment somewhere that I think the best sort of coherence theory doesn’t just accept anything that’s coherent. For example, Bayesianism is usually classified as a coherence theory, with probabilistic compatibility of beliefs being a type of coherence. But Bayesian uncertainty about the truth doesn’t itself imply that there are many truths.
An individual coherent view can coherently think there’s a unique truth
Not if it includes meta-level reasoning about coherence. For the reasons I have already explained.
I mentioned in another comment somewhere that I think the best sort of coherence theory doesn’t just accept anything that’s coherent.
Well, I have been having to guess what “coherence” means throughout.
For example, Bayesianism is usually classified as a coherence theory, with probabilistic compatibility of beliefs being a type of coherence. But Bayesian uncertainty about the truth doesn’t itself imply that there are many truths.
Bayesians don’t expect that there are multiple truths, but can’t easily show that there are not. ETA: The claim is not that Bayesian lack of convergence comes from Bayesian probabilism; the claim is that it comes from starting with radically different priors, and only accepting updates that are consistent with them, which is the usual mechanism of coherentist non-convergence.
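The limiting case of only accepting updates consistent with the prior is easy to state (a standard observation, added here only for concreteness): a hypothesis that starts at probability zero stays at zero under any evidence that itself has positive probability.

```latex
P(H \mid E) = \frac{P(H \wedge E)}{P(E)} \le \frac{P(H)}{P(E)} = 0
\qquad \text{whenever } P(H) = 0 \text{ and } P(E) > 0.
```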
Not if it includes meta-level reasoning about coherence. For the reasons I have already explained.
To put it simply: I don’t get it. If meta-reasoning corrupts your object-level reasoning, you’re probably doing meta-reasoning wrong.
Well, I have been having to guess what “coherence” means throughout.
Sorry. My quote you were originally responding to:
This involves some question-begging, since it assumes the kind of convergence that we’ve set out to prove, but I am fine with resigning myself to illustrating the coherence of the pro-agreement camp rather than decisively arguing it.
By ‘coherence’ here, I simply meant non-contradictory-ness. Of course I can’t firmly establish that something is non-contradictory without some kind of consistency proof. What I meant was, in the paragraph in question, I’m only trying to sketch a possible view, to show some evidence that it can’t be easily dismissed. I wasn’t trying to discuss coherentism or invoke it in any way.
Bayesians don’t expect that there are multiple truths, but can’t easily show that there are not.
Not sure what you mean here.
Taking a step back from the details, it seems like what’s going on here is that I’m suggesting there are multiple possible views (IE we can spell out abstract rationality to support the idea of Agreement or to deny it), and you’re complaining about the idea of multiple possible views. Does this seem very roughly correct to you, or like a mischaracterization?
To put it simply: I don’t get it. If meta-reasoning corrupts your object-level reasoning, you’re probably doing meta-reasoning wrong
Of course, I didn’t say “corrupts”. If you don’t engage in meta-level reasoning, you won’t know what your object-level reasoning is capable of, for better or worse. So you don’t get to assume your object-level reasoning is fine just because you’ve never thought about it. Meta-level reasoning is revealing flaws, not creating them.
Taking a step back from the details, it seems like what’s going on here is that I’m suggesting there are multiple possible views (IE we can spell out abstract rationality to support the idea of Agreement or to deny it), and you’re complaining about the idea of multiple possible views.
What matters is whether there is at least one view that works, that solves epistemology. If what you mean by “possible” is some lower bar than working fully and achieving all the desiderata, that’s not very interesting, because everyone knows there are multiple flawed theories.
If you can spell out an abstract rationality that achieves Agreement, and Completeness, and Consistency, and so on, then by all means do so. I have not seen it done yet.