That doesn’t imply the incoherence of the anti-agreement camp.
I basically think that agreement-bayes and non-agreement-bayes are two different models with various pros and cons. Both of them are high-error models in the sense that they model humans as an approximation of ideal rationality.
Coherence is like that: it’s a rather weak condition, particularly in the sense that it can’t show there is a single coherent view. If you believe there is a single truth, you shouldn’t treat coherence as the sole criterion of truth.
I think this is reasoning too loosely about a broad category of theories. An individual coherent view can coherently think there’s a unique truth. I mentioned in another comment somewhere that I think the best sort of coherence theory doesn’t just accept anything that’s coherent. For example, Bayesianism is usually classified as a coherence theory, with probabilistic compatibility of beliefs being a type of coherence. But Bayesian uncertainty about the truth doesn’t itself imply that there are many truths.
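To make that concrete with a toy illustration (the numbers here are arbitrary, just mine): probabilistic coherence only constrains how degrees of belief hang together, e.g.

$$P(A) + P(\lnot A) = 1, \qquad P(A \wedge B) \le P(A).$$

An agent with $P(H)=0.6$ and $P(\lnot H)=0.4$ satisfies those constraints while still presupposing that exactly one of $H$, $\lnot H$ obtains; the uncertainty is about which world is actual, not a commitment to several truths.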
An individual coherent view can coherently think there’s a unique truth
Not if it includes meta-level reasoning about coherence. For the reasons I have already explained.
I mentioned in another comment somewhere that I think the best sort of coherence theory doesn’t just accept anything that’s coherent.
Well, I have been having to guess what “coherence” means throughout.
For example, Bayesianism is usually classified as a coherence theory, with probabilistic compatibility of beliefs being a type of coherence. But Bayesian uncertainty about the truth doesn’t itself imply that there are many truths.
Bayesians don’t expect that there are multiple truths, but can’t easily show that there are not. ETA: The claim is not that Bayesian lack of convergence comes from Bayesian probabilism; the claim is that it comes from starting with radically different priors, and only accepting updates that are consistent with them—the usual mechanism of coherentist non-convergence.
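To illustrate the mechanism with a toy sketch (entirely illustrative numbers, a coin-bias example of my own): two agents who both update by Bayes’ rule on the same evidence, but whose priors each assign zero probability to a hypothesis the other takes seriously, each remain probabilistically coherent and yet never converge, since conditioning can never revive a hypothesis that starts at probability zero.

```python
# Toy sketch (illustrative numbers only): two Bayesian agents see the same coin
# flips, but their priors each rule out a hypothesis the other takes seriously.
# Both update coherently by Bayes' rule, yet they never agree, because a
# hypothesis given prior probability 0 can never be revived by conditioning.
import random

random.seed(0)

def update(prior, heads):
    """One Bayesian update on a single coin flip; hypotheses at 0 stay at 0."""
    posterior = {h: p * (h if heads else 1 - h) for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Hypotheses: the coin's bias toward heads is 0.3, 0.5, or 0.7.
agent_a = {0.3: 0.5, 0.5: 0.5, 0.7: 0.0}  # A is dogmatically sure it isn't 0.7
agent_b = {0.3: 0.0, 0.5: 0.5, 0.7: 0.5}  # B is dogmatically sure it isn't 0.3

true_bias = 0.7
for _ in range(1000):
    heads = random.random() < true_bias  # both agents observe the same flip
    agent_a = update(agent_a, heads)
    agent_b = update(agent_b, heads)

print({h: round(p, 3) for h, p in agent_a.items()})  # A: ~all mass on 0.5
print({h: round(p, 3) for h, p in agent_b.items()})  # B: ~all mass on 0.7
```

Give both agents the same support over the three hypotheses and they converge on 0.7; the persistent disagreement comes entirely from the dogmatic priors, not from Bayes’ rule itself.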
Not if it includes meta-level reasoning about coherence. For the reasons I have already explained.
To put it simply: I don’t get it. If meta-reasoning corrupts your object-level reasoning, you’re probably doing meta-reasoning wrong.
Well, I have been having to guess what “coherence” means throughout.
Sorry. My quote you were originally responding to:
This involves some question-begging, since it assumes the kind of convergence that we’ve set out to prove, but I am fine with resigning myself to illustrating the coherence of the pro-agreement camp rather than decisively arguing it.
By ‘coherence’ here, I simply meant non-contradictoriness. Of course I can’t firmly establish that something is non-contradictory without some kind of consistency proof. What I meant was that, in the paragraph in question, I’m only trying to sketch a possible view, to show some evidence that it can’t be easily dismissed. I wasn’t trying to discuss coherentism or invoke it in any way.
Bayesians don’t expect that there are multiple truths, but can’t easily show that there are not.
Not sure what you mean here.
Taking a step back from the details, it seems like what’s going on here is that I’m suggesting there are multiple possible views (IE we can spell out abstract rationality to support the idea of Agreement or to deny it), and you’re complaining about the idea of multiple possible views. Does this seem very roughly correct to you, or like a mischaracterization?
To put it simply: I don’t get it. If meta-reasoning corrupts your object-level reasoning, you’re probably doing meta-reasoning wrong
Of course, I didn’t say “corrupts”. If you don’t engage in meta-level reasoning, you won’t know what your object-level reasoning is capable of, for better or worse. So you don’t get to assume your object-level reasoning is fine just because you’ve never thought about it. So meta-level reasoning is revealing flaws, not creating them.
Taking a step back from the details, it seems like what’s going on here is that I’m suggesting there are multiple possible views (IE we can spell out abstract rationality to support the idea of Agreement or to deny it), and you’re complaining about the idea of multiple possible views.
What matters is whether there is at least one view that works, that solves epistemology. If what you mean by “possible” is some lower bar than working fully and achieving all the desiderata, that’s not very interesting, because everyone knows there are multiple flawed theories.
If you can spell out an abstract rationality that achieves Agreement, and Completeness, and Consistency, and so on, then by all means do so. I have not seen it done yet.