Circularity isn’t just a criterion, it’s one of the unholy trinity of criteria (particularism, regress and circularity) that are known to have problems. The problem with circularity is that it justifies too much, not too little.
My take on “justifies too much” is that coherentism shouldn’t just say coherence is the criterion, in the sense of anything coherent is allowed to fly. Instead, the (subjectively) correct approach is (something along the lines of) coherence with the rest of my beliefs.
What do you mean by coherence?
Coherence is one of the six response types cited in the OP. I was also using ‘circularity’ and ‘coherence’ somewhat synonymously. Sorry for the confusion.
When I took undergrad philosophy courses, “coherentism” and “foundationalism” were two frequently-used categories representing opposite approaches to justification/knowledge. Foundationalism is the idea that everything should be built up from some kind of foundation (so here we could classify it as particularism, probably). Coherentism is basically the negation of that, but is generally assumed to replace building-up-from-foundations with some notion of coherence. Neither approach is very well-specified; they’re supposed to be general clusters of approaches, as opposed to full views.
More specifically, with respect to my previous statement:
Another one of them is the infinite-recursion problem Gordon mentions (also known as the regress argument). This argument rests on the foundationalist intuitions that all beliefs need to be justified, and infinite chains of justification (such as circular justification) don’t count.
Coherence seems to also qualify as a possible answer to this version of the criterion problem, addressing the problem by pointing to which of the contradictory assumptions we should give up.
Here, we can interpret coherentism specifically as the acceptance of circular logic as valid (the inference “A, therefore A” is a valid one, for example—note that ‘validity’ is about the argument from premise to conclusion, and does not imply accepting the conclusion). This resolves the infinite regress problem by accepting regress, whereas particularism instead rejects the assumption that every statement needs a justification.
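For precision, the validity claim here can be stated model-theoretically (this is a standard logic-textbook point, not something argued elsewhere in the thread), and the one-step circle passes trivially:

```latex
% An argument P_1, \dots, P_n \therefore C is valid iff every
% interpretation that makes all the premises true also makes C true.
% For the one-step circle this is immediate:
A \models A
% Soundness (validity plus true premises) is the stronger property:
% "A, therefore A" is valid for any A, but sound only when A is true.
```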
Circularity isn’t just a criterion, it’s one of the unholy trinity of criteria (particularism, regress and circularity) that are known to have problems. The problem with circularity is that it justifies too much, not too little.
Coherence is either circularity, in which case it has the same problems; or something else, in which case it needs to be spelt out.
I don’t think EY has spelt out a fourth criterion that works. Where Regress Stops is particularism.
I think the details of these views matter a whole lot. Which kind of particularism? How is it justified? Does the motivation make sense? Or which coherentism, etc.
The arguments about the Big Three are vague and weak due to the lack of detail. Chisholm makes his attacks on each approach, which sound convincing as-stated, but which seem much less relevant when you’re dealing with a fully-fleshed-out view with specific arguments in its favor.
(But note that for Chisholm, the three are: particularism, methodism, skepticism. So his categorization is different from yours.)
You classify Eliezer as particularism, I see it as a type of circular/coherentist view, Gordon Worley sees need to invent a “pragmatism” category. The real answer is that these categories are vague and Eliezer’s view is what it is, regardless of how you label it.
Also note that I prefer to claim Eliezer’s view is essentially correct without claiming that it solves the problem of the criterion, since I find the criterion problem to be worryingly vague.
My take on “justifies too much” is that coherentism shouldn’t just say coherence is the criterion, in the sense of anything coherent is allowed to fly. Instead, the (subjectively) correct approach is (something along the lines of) coherence with the rest of my beliefs.
That’s how coherence usually works. If no new belief is accepted unless consistent with existing beliefs, then you don’t get as much quodlibet as under pure circular justification, but you don’t get convergence on a single truth either.
You classify Eliezer as particularism, I see it as a type of circular/coherentist view, Gordon Worley sees need to invent a “pragmatism” category.
Having re-read Where Recursive Justification Hits Bottom, I think EY says different things in different places. If he is saying that our Rock Bottom assumptions are actually valid, that’s particularism. If he is saying that we are stuck with them, however bad they are, it’s not particularism, and not a solution to epistemology.
He also offers a kind of circular defense of induction, which I don’t think amounts to full-fledged circularity, because you need some empirical data to kick things off. The Circular Justification of Induction isn’t entirely bad, but it’s yet another partial solution, because it’s limited to the empirical and quantifiable. If you explicitly limit everyone to the empirical and quantifiable, that’s logical positivism, but EY says he isn’t a logical positivist.
Also note that I prefer to claim Eliezer’s view is essentially correct without claiming that it solves the problem of the criterion, since I find the criterion problem to be worryingly vague
What are his views, and what is it correct about?
Having read them, I still don’t know.
I think the details of these views matter a whole lot. Which kind of particularism? How is it justified? Does the motivation make sense? Or which coherentism, etc
It matters both ways. The solution needs to be clear, if there is one.
“Usually” being the key here. To me, the most interesting coherence theories are broadly bayesian in character.
but you don’t get convergence on a single truth either.
I’m not sure what position you’re trying to take or what argument you’re trying to make here—do you think there’s a correct theory which does have the property of convergence on a single truth? Do you think convergence on a single truth is a critical feature of a successful theory?
I don’t think it’s possible to converge on the truth in all cases, since information is limited—EG, we can’t decide all undecidable mathematical statements, even with infinite time to think. Because this is a fundamental limit, though, it doesn’t seem like a viable charge against an individual theory.
If he is saying that our Rock Bottom assumptions are actually valid, that’s particularism. If he is saying that we are stuck with them, however bad they are, it’s not particularism, and not a solution to epistemology.
I don’t think he’s saying either of those things exactly. He is saying that we can question our rock bottom assumptions in the same way that we question other things. We are not stuck with them because we can change our mind about them based on this deliberation. However, this deliberation had better use our best ideas about how to validate or reject ideas (which is, in a sense, circular when what we are analyzing is our best ideas about how to validate or reject ideas). Quoting from Eliezer:
So what I did in practice, does not amount to declaring a sudden halt to questioning and justification. I’m not halting the chain of examination at the point that I encounter Occam’s Razor, or my brain, or some other unquestionable. The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques. What else could I possibly use?
And later in the essay:
If one of your current principles does come up wanting—according to your own mind’s examination, since you can’t step outside yourself—then change it! And then go back and look at things again, using your new improved principles.
See, he’s not saying they are actually valid, and he’s not saying we’re stuck with them. He’s just recommending using your best understanding in the moment, to move forward.
however bad they are, it’s not particularism, and not a solution to epistemology.
I’m also not sure what “a solution to epistemology” means.
Quoting from Eliezer:
But this doesn’t answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?
So I think it’s fair to say that Eliezer is trying to answer the regress argument. That’s the specific question he is trying to answer. Observing that his position is “not a solution to epistemology” does little to recommend against it.
He also offers a kind of circular defense of induction, which I don’t think amounts to full-fledged circularity, because you need some empirical data to kick things off.
I think another subtlety which might differentiate it from what’s ordinarily called circular reasoning is a use/mention distinction. He’s recommending using your best reasoning principles, not assuming them like axioms. This is what I had in mind at the beginning when I said that his essay didn’t hand the reader a procedure to distinguish his recommendation from the bad kind of circular reasoning—a use/mention distinction seems like a plausible analysis of the difference, but he at least doesn’t emphasize it. Instead, he seems to analyze the situation as this-particular-case-of-circular-logic-works-fine:
So, at the end of the day, what happens when someone keeps asking me “Why do you believe what you believe?”
At present, I start going around in a loop at the point where I explain, “I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct.”
But then… haven’t I just licensed circular logic?
Actually, I’ve just licensed reflecting on your mind’s degree of trustworthiness, using your current mind as opposed to something else.
I take this position to be accurate so far as it goes, but somewhat lacking in providing a firm detector for bad vs good circular logic. Eliezer is clearly aware of this:
I do think that reflective loops have a meta-character which should enable one to distinguish them, by common sense, from circular logics. But anyone seriously considering a circular logic in the first place, is probably out to lunch in matters of rationality; and will simply insist that their circular logic is a “reflective loop” even if it consists of a single scrap of paper saying “Trust me”. Well, you can’t always optimize your rationality techniques according to the sole consideration of preventing those bent on self-destruction from abusing them.
“Usually” being the key here. To me, the most interesting coherence theories are broadly bayesian in character.
Bayesianism is even more explicit about the need for compatibility with existing beliefs, ie. priors.
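That prior-dependence is easy to make concrete. The following is only an illustrative sketch with made-up numbers, not anything from the thread: the same evidence lands very differently depending on the prior it has to cohere with.

```python
# Sketch: in Bayesian updating, the prior plays the role of the
# "existing beliefs" that any new piece of evidence is integrated with.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) by Bayes' theorem."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

# The same evidence (a 4:1 likelihood ratio in favour of H) moves
# different priors to very different posteriors:
skeptic = update(0.01, 0.8, 0.2)   # stays below 0.04
agnostic = update(0.50, 0.8, 0.2)  # moves to 0.8
```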
I’m not sure what position you’re trying to take or what argument you’re trying to make here
I don’t think there is a single theory that achieves every desideratum (including minimality of unjustified assumptions). Ie. Epistemology is currently unsolved.
do you think there’s a correct theory which does have the property of convergence on a single truth?
I think convergence is a desideratum.
I don’t know of a theory that achieves all desiderata. There’s any number of trivial theories that can converge, but do nothing else. There’s lots of partial theories as well, but it’s not clear how to make the tradeoffs.
I don’t think it’s possible to converge on the truth in all cases, since information is limited—EG, we can’t decide all undecidable mathematical statements, even with infinite time to think. Because this is a fundamental limit, though, it doesn’t seem like a viable charge against an individual theory.
That’s not the core problem: there are reasons to believe that convergence can’t be achieved, even if everyone has access to the same finite pool of information. The problem of the criterion is one of them… if there is fundamental disagreement about the nature of truth and evidence, then agents that fundamentally differ won’t converge in finite time.
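To illustrate that last point with a toy model (my own hedged sketch, not anything claimed in the thread): standard merging-of-opinions results assume the agents share a likelihood model, i.e. agree on what the evidence means. Drop that assumption and the same data can drive beliefs apart rather than together.

```python
# Sketch: two agents see the *same* ten observations of E, but disagree
# about what E is evidence for. Under A's model each observation favours
# H at 4:1 odds; under B's model it favours not-H at 4:1.
odds_a = odds_b = 1.0  # both start at even odds on H
for _ in range(10):
    odds_a *= 4.0      # A: P(E|H) / P(E|not-H) = 4
    odds_b *= 0.25     # B: P(E|H) / P(E|not-H) = 1/4

def prob(odds: float) -> float:
    """Convert odds on H back to a probability."""
    return odds / (1.0 + odds)

# prob(odds_a) ends near 1 and prob(odds_b) near 0: identical finite
# evidence, growing disagreement rather than convergence.
```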
We are not stuck with them because we can change our mind about them based on this deliberation. However, this deliberation had better use our best ideas about how to validate or reject ideas (which is, in a sense, circular when what we are analyzing is our best ideas about how to validate or reject ideas
Yes...the circularity of the method weighs against convergence in the outcome.
He is saying that we can question our rock bottom assumptions in the same way that we question other things. We are not stuck with them because we can change our mind about them based on this deliberation. However, this deliberation had better use our best ideas about how to validate or reject ideas
Our best ideas relatively might not be good enough absolutely. In that passage he is sounding like a Popperian, but the Popperian approach is particularly unable to achieve convergence.
“What else could I possibly use?”
Doesn’t imply that what he must use is any good in absolute terms.
I don’t think there is a single theory that achieves every desideratum (including minimality of unjustified assumptions). Ie. Epistemology is currently unsolved.
I was never arguing against this. I broadly agree. However, I also think it’s a poor problem frame, because “solve epistemology” is quite vague. It seems better to be at least somewhat more precise about what problems one is trying to solve.
Our best ideas relatively might not be good enough absolutely. In that passage he is sounding like a Popperian, but the Popperian approach is particularly unable to achieve convergence.
Well, just because he says some of the things that Popper would say, doesn’t mean he says all of the things that Popper would say.
do you think there’s a correct theory which does have the property of convergence on a single truth?
I think convergence is a desideratum. I don’t know of a theory that achieves all desiderata. Theres any number of trivial theories that can converge, but do nothing else. There’s lots of partial theories as well, but it’s not clear how to make the tradeoffs.
I don’t think it’s possible to converge on the truth in all cases, since information is limited—EG, we can’t decide all undecidable mathematical statements, even with infinite time to think. Because this is a fundamental limit, though, it doesn’t seem like a viable charge against an individual theory.
That’s not the core problem: there are reasons to believe that convergence can’t be achieved, even if everyone has access to the same finite pool of information. The problem of the criterion is one of them… if there is fundamental disagreement about the nature of truth and evidence, then agents that fundamentally differ won’t converge in finite time.
If you’ve got an argument that a desideratum can’t be achieved, don’t you want to take a step back and think about what’s achievable? In the quoted section above, it seems like I offer one argument that convergence isn’t achievable, and you pile on more, but you still stick to a position something like, we should throw out theories that don’t achieve it?
Bayesianism is even more explicit about the need for compatibility with existing beliefs, ie. priors.
That’s why I’m using it as an example of coherentism.
IDK what the problem is here, but it seems to me like there’s some really weird disconnect happening in this conversation, which keeps coming back in full force despite our respective attempts to clarify things to each other.
I was never arguing against this. I broadly agree.
But you also said that:
Also note that I prefer to claim Eliezer’s view is essentially correct
Correct about what? That he has solved epistemology, or that epistemology is unsolved, or what to do in the absence of a solution? Remember, the standard rationalist claim is that epistemology is solved by Bayes. That’s the claim that people like Gordon and David Chapman are arguing against. If you say you agree with Yudkowsky, that is what people are going to assume you mean.
However, I also think it’s a poor problem frame, because “solve epistemology” is quite vague.
I just told you what that means: “a single theory that achieves every desideratum (including minimality of unjustified assumptions)”.
Also note that I prefer to claim Eliezer’s view is essentially correct without claiming that it solves the problem of the criterion, since I find the criterion problem to be worryingly vague
So Yudkowsky is essentially correct about ….something… but not necessarily about the thing this discussion is about.
Well, just because he says some of the things that Popper would say, doesn’t mean he says all of the things that Popper would say.
He says different things in different places, as I said, so he’s unclear.
If you’ve got an argument that a desideratum can’t be achieved, don’t you want to take a step back and think about what’s achievable
I don’t think all desiderata are achievable by one theory. That’s my precise reason for thinking that epistemology is unsolved.
but you still stick to a position something like, we should throw out theories that don’t achieve it?
I didn’t say that. I haven’t even got into the subject of what to do given the failure of epistemology to meet all its objectives.
What I was talking about specifically was the inability of Bayes to achieve convergence. You seem to disagree, because you were talking about “agreement Bayes”.
Also note that I prefer to claim Eliezer’s view is essentially correct
Correct about what? That he has solved epistemology, or that epistemology is unsolved, or what to do in the absence of a solution? Remember, the standard rationalist claim is that epistemology is solved by Bayes. That’s the claim that people like Gordon and David Chapman are arguing against. If you say you agree with Yudkowsky, that is what people are going to assume you mean.
I already addressed this in a previous comment:
I’m also not sure what “a solution to epistemology” means.
Quoting from Eliezer:
But this doesn’t answer the legitimate philosophical dilemma: If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?
So I think it’s fair to say that Eliezer is trying to answer the regress argument.
I am trying to be specific about my claims. I feel like you are trying to pin very general and indefensible claims on me. I am not endorsing everything Eliezer says; I am endorsing the specific essay.
However, I also think it’s a poor problem frame, because “solve epistemology” is quite vague.
I just told you what that means: “a single theory that achieves every desideratum (including minimality of unjustified assumptions)”.
Is there some full list of desiderata which has broad agreement or which you otherwise want to defend?
I feel like any good framing of “solve epistemology” should start by being more precise than “solve epistemology”, wrt what problem or problems are being approached.
but you still stick to a position something like, we should throw out theories that don’t achieve it?
I didn’t say that. I haven’t even got into the subject of what to do given the failure of epistemology to meet all its objectives.
Agreed, you didn’t say that. It’s merely my best attempt at interpretation. I was trying to ask a question about what you were trying to say. It seems to me like one thing you have been trying to do in this conversation is dismiss coherentism as a possible answer, on the argument that it doesn’t satisfy some specific criteria, in particular truth-convergence. I’m arguing that truth-convergence should clearly be thrown out as a criterion because it’s impossible to satisfy (although it can be revised into more feasible criteria). On my understanding, you yourself seemed to agree about its infeasibility, although you seemed to think we should focus on different arguments about why it is infeasible (you said that my argument misses the point). But (on my reading) you continued to seem to reject coherentism on the same argument, namely the truth-convergence problem. So I am confused about your view.
I don’t think all desiderata are achievable by one theory. That’s my precise reason for thinking that epistemology is unsolved.
To reiterate: then why would they continue to be enshrined as The Desiderata?
So Yudkowsky is essentially correct about ….something… but not necessarily about the thing this discussion is about.
I mean, looking back, it was me who initially agreed with the OP [original post], in a top-level comment, that Eliezer’s essay was essentially correct. You questioned that, and I have been defending my position… and now it’s somehow off-topic?
Correct about what? That he has solved epistemology, or that epistemology is unsolved, or what to do in the absence of a solution?
I already addressed this in a previous comment
I don’t see where.
Is there some full list of desiderata which has broad agreement or which you otherwise want to defend?
Certainty. A huge issue in early modern philosophy which has now been largely abandoned.
Completeness. Everything is either true or false, nothing is neither.
Consistency. Nothing is both true and false.
Convergence. Everyone can agree.
Objectivity. Everyone can agree on something that’s actually true.
It seems to me like one thing you have been trying to do in this conversation is dismiss coherentism as a possible answer
I dismissed it as an answer that fulfills all criteria, because it doesn’t fulfil Convergence. I didn’t use the word possible—you did in another context, but I couldn’t see what you meant. If nothing fulfils all criteria, then coherentism could be preferable to approaches with other flaws.
To reiterate: then why would they continue to be enshrined as The Desiderata?
Because all of them individually can be achieved if you make trade-offs.
Because things don’t stop being desirable when they are unavailable.
I’m arguing that truth-convergence should clearly be thrown out as a criterion because it’s impossible to satisfy
That isn’t at all clear.
Lowering the bar to whatever you can jump over, AKA Texas sharpshooting, isn’t a particularly rational procedure. The absolute standard you are or are not hitting relates to what you can consistently claim on the object level: without the possibility of convergence on objective truth, you can’t claim that people with other beliefs are irrational or wrong (at least so long as they hit some targets).
It’s still important to make relative comparisons even if you can’t hit the absolute standard...but it’s also important to remember your relatively best theory is falling short of absolute standards.
Completeness: I don’t need everything to be either true or false, I just need everything to have a justifiable probability.
That’s also Completeness.
Convergence + Objectivity
Giving up on convergence has practical consequences. To be Consistent, you need to give up on the stance that your views are a slam dunk, and the other tribe’s views are indefensible.
My take on “justifies too much” is that coherentism shouldn’t just say coherence is the criterion, in the sense of anything coherent is allowed to fly. Instead, the (subjectively) correct approach is (something along the lines of) coherence with the rest of my beliefs.
Coherence is one of the six response types cited in the OP. I was also using ‘circularity’ and ‘coherence’ somewhat synonymously. Sorry for the confusion.
When I took undergrad philosophy courses, “coherentism” and “foundationalism” were two frequently-used categories representing opposite approaches to justifiation/knowledge. Foundationalism is the idea that everything should be built up from some kind of foundation (so here we could classify it as particularism, probably). Coherentism is basically the negation of that, but is generally assumed to replace building-up-from-foundations with some notion of coherence. Neither approach is very well-specified; they’re supposed to be general clusters of approaches, as opposed to full views.
More specifically, with respect to my previous statement:
Here, we can interpret coherentism specifically as the acceptance of circular logic as valid (the inference “A, therefore A” is a valid one, for example—note that ‘validity’ is about the argument from premise to conclusion, and does not imply accepting the conclusion). This resolves the infinite regress problem by accepting regress, whereas particularism instead rejects the assumption that every statement needs a justification.
I think the details of these views matter a whole lot. Which kind of particularism? How is it justified? Does the motivation make sense? Or which coherentism, etc.
The arguments about the Big Three are vague and weak due to the lack of detail. Chisholm makes his attacks on each approach, which sound convincing as-stated, but which seem much less relevant when you’re dealing with a fully-fleshed-out view with specific arguments in its favor.
(But note that for Chisholm, the three are: particularism, methodism, skepticism. So his categorization is different from yours.)
You classify Eliezer as particularism, I see it as a type of circular/coherentist view, Gordon Worley sees need to invent a “pragmatism” category. The real answer is that these categories are vague and Eliezer’s view is what it is, regardless of how you label it.
Also note that I prefer to claim Eliezer’s view is essentially correct without claiming that it solves the problem of the criterion, since I find the criterion problem to be worryingly vague.
That’s how coherence usually works. If no new belief is accepted unless consistent with existing beliefs, then you don’t get as much quodlibet as under pure circular justification, but you don’t get convergence on a single truth either.
Having re-read Where Recursive Justfication Hits Bottom, I think EY says different things in different places. If he is saying that our Rock Bottom assumptions are actually valid, that’s particularism. If he is saying that we are stuck with them, however bad they are it’s not particularism, and not a solution to epistemology.
He also offers a kind of circular defense of induction, which I don’t think amounts to full fledged circularity,because you need some empirical data to kick things off. The Circular Justifcation of Induction isn’t entirely bad, but it’s yet another partial solution, because it’s limited to the empirical and quantifiable. If you explicitly limit everyone to the empirical and quantifiable, that’s logical.positivism, but EY say he’s isn’t a logical positivist.
What are his views, and what is it correct about? Having read them, I still dont know.
It matters both ways. The solution needs to be clear, if there is one.
“Usually” being the key here. To me, the most interesting coherence theories are broadly bayesian in character.
I’m not sure what position you’re trying to take or what argument you’re trying to make here—do you think there’s a correct theory which does have the property of convergence on a single truth? Do you think convergence on a single truth is a critical feature of a successful theory?
I don’t think it’s possible to converge on the truth in all cases, since information is limited—EG, we can’t decide all undecidable mathematical statements, even with infinite time to think. Because this is a fundamental limit, though, it doesn’t seem like a viable charge against an individual theory.
I don’t think he’s saying either of those things exactly. He is saying that we can question our rock bottom assumptions in the same way that we question other things. We are not stuck with them because we can change our mind about them based on this deliberation. However, this deliberation had better use our best ideas about how to validate or reject ideas (which is, in a sense, circular when what we are analyzing is our best ideas about how to validate or reject ideas). Quoting from Eliezer:
And later in the essay:
See, he’s not saying they are actually valid, and he’s not saying we’re stuck with them. He’s just recommending using your best understanding in the moment, to move forward.
I’m also not sure what “a solution to epistemology” means.
Quoting from Eliezer:
So I think it’s fair to say that Eliezer is trying to answer the regress argument. That’s the specific question he is trying to answer. Observing that his position is “not a solution to epistemology” does little to recommend against it.
I think another subtlety which might differentiate it from what’s ordinarily called circular reasoning is a use/mention distinction. He’s recommending using your best reasoning principles, not assuming them like axioms. This is what I had in mind at the beginning when I said that his essay didn’t hand the reader a procedure to distinguish his recommendation from the bad kind of circular reasoning—a use/mention distinction seems like a plausible analysis of the difference, but he at least doesn’t emphasize it. Instead, he seems to analyze the situation as this-particular-case-of-circular-logic-works-fine:
I take this position to be accurate so far as it goes, but somewhat lacking in providing a firm detector for bad vs good circular logic. Eliezer is clearly aware of this:
Bayesianism is even more explicit about the need for compatibility with existing beliefs, I’m priors.
I don’t think there is a single theory that achieves every desideratum (including minimality of unjustified assumptions). Ie. Epistemology is currently unsolved.
I think convergence is a desideratum. I don’t know of a theory that achieves all desiderata. Theres any number of trivial theories that can converge, but do nothing else.
That’s not the core problem: there are reasons to believe that convergence can’t be achieved, even if everyone has access to the same finite pool of information. The problem of the criterion is one of them… if there is fundamental disagreement about the nature of truth and evidence, then agents that fundamentally differ won’t converge in finite time.
Bayesianism is even more explicit about the need for compatibility with existing beliefs, ie. priors.
I don’t think there is a single theory that achieves every desideratum (including minimality of unjustified assumptions). Ie. Epistemology is currently unsolved.
I think convergence is a desideratum. I don’t know of a theory that achieves all desiderata. There’s any number of trivial theories that can converge, but do nothing else. There’s lots of partial theories as well, but it’s not clear how to make the tradeoffs.
That’s not the core problem: there are reasons to believe that convergence can’t be achieved, even if everyone has access to the same finite pool of information. The problem of the criterion is one of them… if there is fundamental disagreement about the nature of truth and evidence, then agents that fundamentally differ won’t converge in finite time.
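For concreteness, here is a minimal sketch (with made-up numbers) of why priors block finite-time convergence: two Bayesian agents observe identical evidence and agree on all the likelihoods, yet after any finite number of updates their posteriors still differ, because the prior never fully washes out.

```python
# Illustrative sketch: two Bayesian agents with different priors update
# on the same evidence, yet their posteriors need not converge in
# finite time. All numbers here are hypothetical.

def posterior(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# Both agents see the same evidence E and agree on the likelihoods
# P(E|H) = 0.6 and P(E|~H) = 0.4 ...
agent_a = 0.9   # ... but A starts nearly convinced of H,
agent_b = 0.01  # while B starts nearly convinced of ~H.

for _ in range(10):  # ten identical, independent observations of E
    agent_a = posterior(agent_a, 0.6, 0.4)
    agent_b = posterior(agent_b, 0.6, 0.4)

# Both agents have moved toward H, but they still disagree
# substantially: shared finite evidence narrows the gap without
# closing it.
print(agent_a, agent_b, agent_a - agent_b)
```

The point isn’t that Bayesians can never converge in the limit, only that with differing priors no finite pool of shared evidence forces agreement, which is what the convergence desideratum would require.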
Yes...the circularity of the method weighs against convergence in the outcome.
Our best ideas, relatively speaking, might not be good enough absolutely. In that passage he sounds like a Popperian, but the Popperian approach is particularly unable to achieve convergence.
Doesn’t imply that what he must use is any good in absolute terms.
I was never arguing against this. I broadly agree. However, I also think it’s a poor problem frame, because “solve epistemology” is quite vague. It seems better to be at least somewhat more precise about what problems one is trying to solve.
Well, just because he says some of the things that Popper would say, doesn’t mean he says all of the things that Popper would say.
If you’ve got an argument that a desideratum can’t be achieved, don’t you want to take a step back and think about what’s achievable? In the quoted section above, it seems like I offer one argument that convergence isn’t achievable, and you pile on more, but you still stick to a position something like, we should throw out theories that don’t achieve it?
That’s why I’m using it as an example of coherentism.
IDK what the problem is here, but it seems to me like there’s some really weird disconnect happening in this conversation, which keeps coming back in full force despite our respective attempts to clarify things to each other.
But you also said that:
Correct about what? That he has solved epistemology, or that epistemology is unsolved, or what to do in the absence of a solution? Remember, the standard rationalist claim is that epistemology is solved by Bayes. That’s the claim that people like Gordon and David Chapman are arguing against. If you say you agree with Yudkowsky, that is what people are going to assume you mean.
I just told you what that means: “a single theory that achieves every desideratum (including minimality of unjustified assumptions)”.
So Yudkowsky is essentially correct about ….something… but not necessarily about the thing this discussion is about.
He says different things in different places, as I said, so he’s unclear.
I don’t think all desiderata are achievable by one theory. That’s my precise reason for thinking that epistemology is unsolved.
I didn’t say that. I haven’t even got into the subject of what to do given the failure of epistemology to meet all its objectives.
What I was talking about specifically was the inability of Bayes to achieve convergence. You seem to disagree, because you were talking about “agreement Bayes”.
I already addressed this in a previous comment:
I am trying to be specific about my claims. I feel like you are trying to pin very general and indefensible claims on me. I am not endorsing everything Eliezer says; I am endorsing the specific essay.
Is there some full list of desiderata which has broad agreement or which you otherwise want to defend?
I feel like any good framing of “solve epistemology” should start by being more precise than “solve epistemology”, wrt what problem or problems are being approached.
Agreed that you didn’t say that; it’s merely my best attempt at interpretation. I was trying to ask a question about what you were trying to say. It seems to me like one thing you have been trying to do in this conversation is dismiss coherentism as a possible answer, on the argument that it doesn’t satisfy some specific criteria, in particular truth-convergence. I’m arguing that truth-convergence should clearly be thrown out as a criterion because it’s impossible to satisfy (although it can be revised into more feasible criteria). On my understanding, you yourself seemed to agree about its infeasibility, although you seemed to think we should focus on different arguments about why it is infeasible (you said that my argument misses the point). But (on my reading) you continued to seem to reject coherentism on the same argument, namely the truth-convergence problem. So I am confused about your view.
To reiterate: then why would they continue to be enshrined as The Desiderata?
I mean, looking back, it was me who initially agreed with the OP [original post], in a top-level comment, that Eliezer’s essay was essentially correct. You questioned that, and I have been defending my position… and now it’s somehow off-topic?
I don’t see where.
Certainty: A huge issue in early modern philosophy which has now been largely abandoned.
Completeness: Everything is either true or false, nothing is neither.
Consistency: Nothing is both true and false.
Convergence: Everyone can agree.
Objectivity: Everyone can agree on something that’s actually true.
I dismissed it as an answer that fulfills all criteria, because it doesn’t fulfill Convergence. I didn’t use the word “possible”; you did, in another context, but I couldn’t see what you meant. If nothing fulfills all criteria, then coherentism could be preferable to approaches with other flaws.
Because all of them individually can be achieved if you make trade-offs.
Because things don’t stop being desirable when they are unavailable.
That isn’t at all clear.
Lowering the bar to whatever you can jump over, AKA Texas sharpshooting, isn’t a particularly rational procedure. The absolute standard you are or are not hitting relates to what you can consistently claim on the object level: without the possibility of convergence on objective truth, you can’t claim that people with other beliefs are irrational or wrong (at least so long as they hit some targets).
It’s still important to make relative comparisons even if you can’t hit the absolute standard...but it’s also important to remember your relatively best theory is falling short of absolute standards.
Except for consistency, none of these seem actually desirable as requirements.
Certainty: Don’t need it, I’m fine with probabilities.
Completeness: I don’t need everything to be either true or false, I just need everything to have a justifiable probability.
Consistency: Agree.
Convergence+Objectivity: No Universally Compelling Arguments
That’s also Completeness.
Giving up on convergence has practical consequences. To be Consistent, you need to give up on the stance that your views are a slam dunk, and the other tribe’s views are indefensible.