I think it would be helpful, when dealing with such foundational topics, to taboo “justification”, “validity”, “reason”, and some related terms. It is too easy to stop the reduction there and forget to check what their cause and function are in our self-reflecting epistemic algorithm.
The question shouldn’t be whether circular arguments are “valid” or give me “good reason to believe”, but whether I can edit the parts of my algorithm that handle circular arguments and, as a result, expect (according to my current algorithm) to end up with stronger conviction in more true things.
Your Bayesian argument, that if the claim were false the circle would likely end in contradiction, I find convincing, because I am already convinced to endorse this form of Bayesian reasoning: as a normative principle, it has properties that I have already learned to make sense of according to earlier heuristics that were hopefully good. Those include the heuristic that my heuristics are sometimes bad and that I want to be reasonably robust to that fact, and also that this principle may not be implementable absolutely without sacrificing other things I care about more.
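That Bayesian point can be made concrete with a toy update (all numbers below are made up for illustration, not taken from the post): if tracing a false claim around its circle is more likely to surface a contradiction, then completing the circle without one should raise my credence.

```python
# Toy Bayesian update for the circular-argument point.
# Assumption (illustrative numbers only): a false claim is more likely
# to produce a contradiction when its circular support is traced out.
prior_true = 0.5
p_clean_given_true = 0.95   # P(no contradiction found | claim true)
p_clean_given_false = 0.40  # P(no contradiction found | claim false)

# Marginal probability of finding no contradiction, then Bayes' rule.
p_clean = (prior_true * p_clean_given_true
           + (1 - prior_true) * p_clean_given_false)
posterior_true = prior_true * p_clean_given_true / p_clean

print(round(posterior_true, 3))  # → 0.704, above the 0.5 prior
```

The update is modest rather than decisive, which matches the hedged way I endorse the argument: surviving the circle is evidence, not proof.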
Yeah, I would have liked to dig much deeper into what in the world[1] “justification” points at, but I thought the post would get too long and complex for the simple point being made.
Would very much like to read such a post. I have the basic intuition that it is a soft form of “witness” (as in complexity/cryptography), but it is not very developed.
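For contrast with that soft intuition, the strict sense of “witness” in complexity theory is a short certificate that lets a verifier check a claim cheaply; a minimal sketch (my own illustrative example, not from the thread):

```python
# A "witness" in the complexity-theory sense: a short certificate that
# makes a claim quick to verify. Here, a nontrivial divisor of n
# witnesses the claim that n is composite.
def certifies_composite(n: int, witness: int) -> bool:
    # The check is fast even when finding the divisor is not.
    return 1 < witness < n and n % witness == 0

print(certifies_composite(91, 7))   # True: 7 * 13 = 91, so 91 is composite
print(certifies_composite(97, 7))   # False: 7 does not divide 97
```

The “soft” version would then be something like evidence that raises the verifier’s credence without making verification conclusive.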
[1] I mean: what thing-in-the-world is being pointed at; what are the real phenomena behind “justification”; why we use such a concept.