Circular Reasoning

The opinions here are my own, but owe some salience-work to Sahil.

The idea that circular reasoning is bad is widespread. However, this reputation is undeserved. While circular reasoning should not be convincing (at least not usually), it should also not be considered invalid.

Circular Reasoning is Valid

The first important thing to note is that circular reasoning is logically valid. A implies A. Longer circular arguments (like A implies B implies A) may introduce an invalidity (perhaps A does not imply B); but if so, the invalidity is not due to the circularity.

If circularity itself is to be critiqued, it must be by some other standard than logical validity.

I think it’s fair to say that the most relevant objection to valid circular arguments is that they are not very good at convincing someone who does not already accept the conclusion. You are talking to another person, and need to think about communication from their perspective. Perhaps the reason circular arguments are a common ‘problem’ is that they are valid: people naturally think about what should be a convincing argument from their own perspective, rather than the other person’s.

However, notice that this objection to circular reasoning assumes that one party is trying to convince the other. This is the arguments-as-soldiers mindset.[1] If two people are curiously exploring each other’s perspectives, then circular reasoning could be just fine!

Furthermore, I’ll claim: valid circular arguments should actually be considered as a little bit of positive evidence for their positions!

Let’s look at a concrete example. I don’t think circular arguments are quite so simple as “A implies A”; the circle is usually a bit longer. So, consider a more realistic circular position:[2]

Alice: Why do you believe in God?

Bob: I believe in God based on the authority of the Bible.

Alice: Why do you believe what the Bible says?

Bob: Because the Bible was divinely inspired by God. God is all-knowing and good, so we can trust what God says.

Here we have a two-step loop, A->B and B->A. The arguments are still logically fine; if the Bible tells the truth, and the Bible says God exists, then God exists. If the Bible were divinely inspired by an all-knowing and benevolent God, then it is reasonable to conclude that the Bible tells the truth.
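The logical point can be checked mechanically. Here is a minimal sketch (my own illustration, not from the original discussion) that brute-forces the truth table for the two-step loop, with T standing for “the Bible tells the truth” and G for “God exists and inspired the Bible”:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q."""
    return (not p) or q

# T = "the Bible tells the truth", G = "God exists and inspired the Bible"
# Premise 1 (Bob's first step):  T implies G
# Premise 2 (Bob's second step): G implies T
models = [(T, G)
          for T, G in product([False, True], repeat=2)
          if implies(T, G) and implies(G, T)]

print(models)  # [(False, False), (True, True)]
```

Both the all-true and all-false assignments satisfy the loop, which is exactly the situation described: the circle is internally consistent, but it cannot by itself tell you which of those worlds you are in.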

If Bob is just honestly going through his own reasoning here (as opposed to trying to convince Alice), then it would be wrong for Alice to call out Bob’s circular reasoning as an error. The flaw in circular reasoning is that it doesn’t convince anyone; but that’s not what Bob is trying to do. Bob is just telling Alice what he thinks.

If Alice thinks Bob is mistaken, and wants to point out the problems in Bob’s beliefs, it is better for Alice to contest the premises of Bob’s arguments rather than contest the reasoning form. Pointing out circularity only serves to remind Bob that Bob hasn’t given Alice a convincing argument.

You probably still think Bob has made some mistake in his reasoning, if these are his real reasons. I’ll return to this later.

Circular Arguments as Positive Evidence

I claimed that valid circular arguments should count as a little bit of evidence in favor of their conclusions. Why?

Imagine that the Bible claimed itself to be written by an evil and deceptive all-knowing God, instead of a benign God:

Alice: Why do you believe in God?

Bob: Because the Bible tells me so.

Alice: Why do you believe the Bible?

Bob: Well… uh… huh.

Sometimes, belief systems are not even internally consistent. You’ll find a contradiction[3] just thinking through the reasoning that is approved of by the belief system itself. This should make you disbelieve the thing.

Therefore, by the rule we call conservation of expected evidence, reasoning through a belief system and deriving a conclusion consistent with the premise you started with should increase your credence. It provides some evidence that there’s a consistent hypothesis here; and consistent hypotheses should get some credence, EG, credence decreasing with their complexity.[4]
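This can be verified with a toy calculation. The numbers below are assumptions for illustration only: a 0.5 prior that the belief system is consistent, and a 0.3 chance that an inconsistent system reveals a contradiction along the particular reasoning path being checked:

```python
# Toy numbers (assumptions for illustration): prior credence 0.5 that the
# belief system H is consistent; if it is inconsistent, checking one
# reasoning path exposes a contradiction with probability 0.3; a consistent
# system never yields a contradiction.
prior = 0.5
q_contradiction_if_inconsistent = 0.3

# Probability that the check comes back clean (no contradiction found):
p_clean = prior * 1.0 + (1 - prior) * (1 - q_contradiction_if_inconsistent)

# Bayes' rule: posterior credence in consistency after a clean check...
posterior_clean = prior * 1.0 / p_clean
# ...and after finding a contradiction (which refutes consistency outright):
posterior_contradiction = 0.0

# Conservation of expected evidence: the expected posterior equals the prior.
expected_posterior = (p_clean * posterior_clean
                      + (1 - p_clean) * posterior_contradiction)

print(round(posterior_clean, 3), round(expected_posterior, 3))  # 0.588 0.5
```

Surviving the check raises credence, finding a contradiction destroys it, and the probability-weighted average of the two posteriors equals the prior, exactly as conservation of expected evidence requires.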

Inevitable Circularity

After all this, you might say something like: sure, circular arguments are valid, but they don’t provide justification for beliefs. Bob is still wrong to answer “why” questions in a circular way.[5]

The “Regress Argument” in epistemology goes as follows:

  • Every belief requires justification.

  • However, any justification must itself rest on other beliefs.

  • Therefore, any chain of justification must lead to an infinite regress. (By ‘infinite regress’ I mean either a circle or an infinite nonrepeating chain.)

  • However, an infinite regress does not count as a working justification, either.

  • Therefore, no proper justification can be given.

The point is, you have to live with at least one of:[6]

  1. Some beliefs do not normatively require justification;

  2. Some justifications do not rest on beliefs;

  3. Some justification chains are allowed to be circular;

  4. Some justification chains are allowed to be infinite and non-repeating;

  5. No beliefs are permissible.

There are a few common schools of thought on this:

Foundationalism:[7] There are special “foundational” beliefs. These beliefs might not need justification, or are uniquely open to circular justification (EG, can be justified by themselves), or are uniquely justifiable in a way that does not rest on further beliefs (EG, they are justified by virtue of being true). All other beliefs need to be justified in ways that obey all the axioms of the regress argument.

Coherentism: Circular justification is allowed in some fashion.

Infinitism: Infinite chains of justification are allowed. This position seems rare.

Foundherentism: Some combination of foundationalism and coherentism. Foundationalists concede that foundations are still open to questioning, and thus require nontrivial justifying beliefs. Coherentists concede that some beliefs are more foundational than others. This seems to be the most popular position.

Overall, I would endorse some variety of foundherentism. My main point in this essay, however, is to argue the coherentist part. When I first encountered the regress argument (in undergraduate philosophy), I strongly identified as a foundationalist. I suspect many LessWrongers will have similar feelings. However, in retrospect I think it’s pretty clear that any foundations are also subject to justificatory work, and the sort of justification needed is of the same kind as is needed for everything else. Therefore, coherentism.[8]

To get one misunderstanding out of the way: coherence is not a criterion in the sense of something that can tell you what is true, or what to believe. Just as circular arguments can’t convince you of their conclusions, coherence can’t tell you which coherent perspective to take. It’s more of a rationality condition. Nor is it saying that all coherent positions are equally true, or equally good; only equally justified, from their own perspectives.

Brief Aside on Other Circles

John Wentworth has a thematically similar post on circular definitions, which have a similarly bad reputation. He similarly points out that they can be mathematically fine. We could also argue, as above, that if you believe everything should be definable, and also that definitions should not be circular, you’ll run into trouble eventually.

But then what do you say to Bob?

Compare to: but then what do you say to the Republican?

I think the temptation to outlaw circular arguments as a form of justification comes mainly from trying to construct an objective third-person perspective to judge disagreements. In other words, the concept “justification” is doing double duty. We cannot consistently use it for both honest philosophical examination and constructing arguments to persuade others.

As I said earlier, I think the right thing to do is to question the premises of a circular argument (which of course also means questioning the conclusion), rather than objecting to the argument form.

It is also worth pointing out, and avoiding, the double-counting of evidence that can result from a form of circular reasoning.

Don’t Double Count

One way to think about “re-examining beliefs” is that you somehow blot out a specific belief, and then re-estimate it from your other beliefs.

In the case of Bayesian networks, this intuition can be formalized as the Belief Propagation algorithm.

In Probabilistic Reasoning in Intelligent Systems, Pearl sets up an analogy between his Belief Propagation algorithm and people passing messages in order to count their own total number. To cut out as many details as possible while conveying the basic idea: you can follow a simple procedure where each person adds 1 to a piece of paper (counting themselves), but this only works if you avoid loops (which could cause people to count themselves twice). The analogy between this procedure and Belief Propagation justifies thinking about probabilistic inference as “counting” evidence. In the context of Belief Propagation, circular reasoning can result in double-counting of evidence (or triple-counting, quadruple-counting, etc.; but “double counting” has become the common phrase for over-counting of evidence).

So, if you have some degree of credence in the Bible, this might propagate to give you some degree of credence in God, and vice versa; but if you let this reasoning form a self-reinforcing loop, there’s a problem.
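Here is a deliberately simplified sketch of that failure mode, in log-odds form rather than Pearl’s actual message-passing algorithm; the “2 bits” figure and the three passes around the loop are arbitrary assumptions for illustration:

```python
# One piece of evidence, assumed (arbitrarily) to be worth 2 bits in
# log-odds form. Correct inference counts it once:
log_odds = 0.0          # even prior odds on the conclusion
evidence_bits = 2.0
counted_once = log_odds + evidence_bits

# A self-reinforcing loop (God -> Bible -> God -> ...) feeds the same
# message back around, re-counting it on every pass:
counted_in_loop = log_odds
for _ in range(3):
    counted_in_loop += evidence_bits   # same evidence, counted again

print(counted_once, counted_in_loop)  # 2.0 6.0
```

The loop triples the apparent strength of the evidence without any new data arriving, which is exactly the over-counting the message-passing picture warns against.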

Reflective Loops

The content of this essay is rather similar to Where Recursive Justification Hits Bottom. However, Eliezer tries to draw a distinction between the form of reasoning he is accepting and circular logic:

So, at the end of the day, what happens when someone keeps asking me “Why do you believe what you believe?”

At present, I start going around in a loop at the point where I explain, “I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct.”

But then… haven’t I just licensed circular logic?

Actually, I’ve just licensed reflecting on your mind’s degree of trustworthiness, using your current mind as opposed to something else.

He proceeds to check for parallels between the circular line of reasoning he in fact approves of, and other circular arguments which he disapproves of:

Is this the same as the one who says, “I believe that the Bible is the word of God, because the Bible says so”?

Couldn’t they argue that their blind faith must also have been placed in them by God, and is therefore trustworthy?

However, on my reading, he mostly fails to point out a structural difference between the two forms of reasoning, and instead ends up taking issue with the specific claims of the religious perspective. Although he argues that circular reasoning doesn’t work as a way of arriving at knowledge which you don’t already have, he ultimately seems to acknowledge that he hasn’t fully specified how to distinguish the reasoning which he endorses from circular reasoning:

Everything, without exception, needs justification. Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops. I do think that reflective loops have a meta-character which should enable one to distinguish them, by common sense, from circular logics. But anyone seriously considering a circular logic in the first place, is probably out to lunch in matters of rationality; and will simply insist that their circular logic is a “reflective loop” even if it consists of a single scrap of paper saying “Trust me”. Well, you can’t always optimize your rationality techniques according to the sole consideration of preventing those bent on self-destruction from abusing them.

On my reading, Eliezer is offering two main suggestions:

  1. “Hold nothing back”: you must reason using all the information at your disposal. The naive way to avoid problematic circular reasoning is to try to be a “philosopher of perfect emptiness” who “assumes nothing”. However, there are no universally compelling arguments. If you assume nothing, you can conclude nothing. Furthermore, it is incorrect to try and figure out the truth using anything less than your full knowledge. The correct way to avoid the problematic sort of circular argument is not to avoid assuming the consequent, but rather, to use the fullness of your knowledge to re-evaluate your assumptions.

  2. Acceptable circular arguments can be contrasted with unacceptable circular arguments via some “meta-character” which Eliezer points at using the term “reflective loop”. Based on his examples, I think what Eliezer means here is that “reflective loop” arguments are the ones which go through the epistemology itself: Eliezer’s examples always pass through some fact about the process by which he reaches conclusions. The epistemological beliefs are justified in a way which fans back out to a broader picture of the world (eg, facts about physics or evolution may be invoked in the justification of epistemic practices). These broad considerations would again be justified by epistemological claims.

In terms of my earlier classification, the second idea seems to be a form of foundationalism: epistemological considerations play a special foundational role, and circular arguments are OK only if they pass through these foundational statements. Foundational statements need to be justified in a special way;[9] EG we can’t say that a reasoning step is reliable because its specific conclusion is correct (even though this inference might be perfectly valid). Instead, we are supposed to consider it as an instance of a class. This seems to be the same sort of outside-view reasoning which Eliezer argues for in Ethical Injunctions and argues against in Inadequate Equilibria.

On the other hand, the first point seems to be saying that circular arguments are nothing special, and you always need to evaluate things in the same way (namely, using all your information, which results in some circles forming). So perhaps my reading of the advice is a bit contradictory.

In any case, it’s at least clear that there is some concession to circles, although it isn’t clear exactly how far that concession goes.

Conclusion

Circular reasoning is valid, and moreover, circular justifications seem necessary in practice. The foundationalist project of building all justifications on some centralized foundation is perhaps productive, but as foundations also need justification, circular justifications or infinite justification chains are mathematically inevitable, and circles seem inevitable in practice. Justification must therefore be seen as a coherence condition, which throws out some belief systems as irrational, but does not point to a unique correct belief system. Specific circular arguments should be critiqued on the merits of their assumptions and reasoning steps, just like any other arguments, rather than by the circularity itself.


You can support me on Patreon.

  1. ^

    Well, perhaps this statement is a bit too strong. Mathematicians avoid circular reasoning when proving theorems, but I wouldn’t accuse them of making an arguments-as-soldiers mistake.

    However, if you’re curiously inquiring about someone’s belief system, and they employ circular reasoning, and then you object to their reasoning on the grounds that it is circular, then I’d accuse you of making an arguments-as-soldiers mistake. Their reasoning doesn’t have to be designed to convince you.

  2. ^

    This example is also used in Where Recursive Justification Hits Bottom.

  3. ^

    Here, I include “probabilistic contradiction”, such as a deceptive God writing a truthful Bible. It isn’t actually logically inconsistent, just improbable.

  4. ^

    Notice that the circularity of the argument doesn’t play a special role here. Any path through the structure of the belief system could find a contradiction. So any illustration of how the belief system reasons which doesn’t illustrate a contradiction should raise your credence in the belief system (at least slightly), if you expected that there might be a contradiction along that path before checking.

  5. ^

    You might want to say something like:

    Reader: Sure, Bob’s position might be logically consistent. If Bob was born with a prior strongly favoring the Bible and God then yeah, maybe Bob could technically be perfectly rational here. But no one is born with that prior! Alice is asking Bob why Bob believes these things. It might be accurate as a matter of historical record that Bob came to believe in God due to reading the Bible. But if so, it cannot also be that he believed what the Bible said because it was written by God! Either he came to believe one thing first, or the other (or both at once, which also invalidates the story that each belief caused the other).

    I would agree that, in practice, Bob has made a mistake. However, I don’t think it makes sense to focus so much on the descriptive explanation of how Bob came to believe a thing.

    Alice’s question of “why” Bob believes a thing is ambiguous; Bob could be answering with the descriptive, historical reason why he came to a conclusion, but he could also be answering in a normative way, giving his currently endorsed explanation of why he should believe.

For example, I once heard a philosophy student say that he originally endorsed a philosophy (presentism) because it felt similar to other things he believed. However, his advisor told him that was not a very good reason to believe it.

    It may be the case that this is where the story ended for this particular philosophy student (IE, they were simply admitting that they had non-normative reasons for their belief). However, imagine that the student’s advisor gave him better reasons for the beliefs, which the student came to normatively endorse.

    The cause of the student’s beliefs would still be “I thought presentism felt similar to other things I believed.” But the student’s justification for the belief could have changed.

    So: if Bob might be talking about justification (normative rather than historical “reasons” for beliefs), then the complaint-from-the-reader I postulated above seems misplaced.

    You might adjust your complaints about Bob to account for this distinction:

    Reader: Sure, Bob’s position might be logically consistent. However, beliefs need to be justified! Bob cannot be answering the historical-why, as previously argued. So I interpret Bob as attempting to offer a normative-why. However, Bob fails to do this, because normative-why answers aren’t allowed to contain cycles any more than historical-why answers are. Although there is no logical fallacy here, there is nonetheless a fallacy of justification: justifications are not allowed to be circular.

    If this reply is tempting to you, then you are making the central mistake this post is trying to argue against. Your requirements for “justification” may seem locally reasonable, but they imply a global problem.

  6. ^

    Mathematics does quite well with just #1 and #2. In the context of mathematics, cases of #1 are “axioms”—the designated set of propositions which can be assumed without any justification, in the service of justifying everything else. However, mathematical justifications need to be made up of inference rules as well as axioms (otherwise, we don’t know how to check whether one statement is the consequence of another). This can give rise to examples of #2, as it is sometimes possible to prove things without any axioms (by using only the inference rules).[10]

However, mathematical proofs are about convincing another party of something they don’t already accept. Also, mathematics is quite comfortable always operating in the hypothetical—it’s fine for axioms to remain unjustified, because later (when we apply the math) we will check whether the axioms hold true (or, true enough) for particular cases.

    In a less hypothetical mode of reasoning, more about what’s true than what can be defended, I think it makes sense to reject #1 and #2 (reasserting that we desire, normatively, for all our beliefs to be justifiable), but embrace #3 (and perhaps #4). We can’t pursue the justification of all beliefs at all times, but we do want all of our beliefs to be open to re-examination; therefore, I think we need to be open to an infinite regress.

    My reasons for endorsing an infinite regress, rather than postulating that the chain ends somewhere, are practical: I’m open to the idea of codifying excellent foundational theories (such as Bayesianism, or classical logic, or set theory, or what-have-you) which justify a huge variety of beliefs. However, it seems to me that in practice, such a foundation needs its own justification. We’re not going to find a set of axioms which just seem obvious to all humans once articulated. Rather, there’s some work to be done to make them seem obvious.

    Overall, though, I suppose I endorse a sort of pluralism about justification: “justification” can work different ways in different contexts, and the seeming paradox in the Regress Argument results from an overly simplistic and unrealistic view of what justification is.

  7. ^

    Gordon Worley calls this particularism, presumably because some philosophers use the term.

  8. ^

    Coherentism also fits much better with the probabilist worldview; notice that the basic rationality arguments for Bayesians are “coherence theorems” rather than, say, soundness and completeness theorems.

  9. ^

    I get the feeling that Eliezer is gesturing in the direction of Tiling. The emphasis on the general efficacy of epistemic principles rather than their efficacy in a special case resembles Vingean reflection. I think for Eliezer, a full analysis of the Tiling Agents problem (EG, something like a reflective Bayesian with a full Vingean Tiling result justifying its cognition to itself) would also be a complete response to the regress argument, showing exactly how agents should think about their own self-justification.

  10. ^

An axiom a=a is interchangeable with an inference rule which lets us conclude a=a. Axioms and inference rules can also be interchangeable in more complex ways. Therefore, the distinction between #1 and #2 here does not seem very important. The Wikipedia article on the regress argument mentions no such distinction; I’m being extra pedantic by including it.

    One example where I might want to invoke this distinction is sensory perception. It might be sensible to say that a belief about direct perception is justified not by some belief, but rather by the fact that you have perceived it. In symbols, X justifies P(X)=1 in cases where X causes P(X)=1 (or at least, in some such cases).

  12. ^

    Gordon Worley discusses “the problem of the criterion”, which seems to me like a version of the regress argument which is worse in that it confuses several issues (see my comment for some issues I think it confuses). However, his discussion of possible solutions to the problem seems on-point for the regress argument.