I have nothing against justifications being circular (IE the same ideas recurring on many levels), just as I have nothing in principle against finding a foundationalist explanation. A circular argument is just a particularly simple form of infinite regress.
But my main argument against the circular-reasoning explanation being the whole story is that attempted versions of it (“coherentist” positions) don’t seem very good when you get down to the details.
Pure coherentist positions tend to rely on a stipulated notion of coherence (such as probabilistic coherence, or weighted constraint satisfaction, or something along those lines). These notions are themselves fixed. This could be fine if the coherence notions were sufficiently “assumption-lite” so as not to be inherently Goodhart-prone, etc., but so far it doesn’t seem that way to me.
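To make the worry concrete, here is a minimal sketch (in Python, with invented propositions, constraints, and weights) of what a stipulated coherence notion in the weighted-constraint-satisfaction style might look like. The thing to notice is that the scoring rule itself sits outside the system being scored and is not up for revision:

```python
# Toy sketch of a "stipulated" coherence notion in the
# weighted-constraint-satisfaction style. The belief encoding, the
# constraints, and the weights are all invented for illustration; the
# point is just that the scoring rule is fixed in advance rather than
# being up for revision itself.

from typing import Callable, Dict, List, Tuple

Beliefs = Dict[str, bool]                  # proposition -> accepted?
Constraint = Callable[[Beliefs], bool]     # a soft constraint over beliefs

def coherence(beliefs: Beliefs,
              weighted_constraints: List[Tuple[float, Constraint]]) -> float:
    """Total weight of the constraints this belief set satisfies."""
    return sum(w for w, c in weighted_constraints if c(beliefs))

# Two toy constraints with fixed weights.
constraints = [
    (1.0, lambda b: not (b["rain"] and b["dry_streets"])),  # consistency
    (0.5, lambda b: (not b["rain"]) or b["wet_streets"]),    # rain explains wet streets
]

beliefs = {"rain": True, "dry_streets": False, "wet_streets": True}
print(coherence(beliefs, constraints))  # 1.5: both constraints satisfied
```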
I’m predicting that you’ll agree with me on that, and grant that the notion of coherence should itself be up for grabs. I don’t actually think the coherentist/foundationalist/infinitist trilemma is that good a characterization of our disagreement here. My claim isn’t so much the classical claim that there’s an infinite regress of justification as it is a claim that there’s an infinite regress of uncertainty: we’re uncertain at all the levels, and need to somehow manage that. This fits the ship-of-Theseus picture just fine.
In other words, one can unroll a ship of Theseus into an infinite hierarchy where each level says something about how the next level down gets re-adjusted over time. The reason for doing this is to achieve the foundationalist goal of understanding the system better, without the foundationalist method of fixing foundational assumptions. The main motive here is amplification. Taking just a ship of Theseus, it’s not obvious how to make it better besides running it forward faster (and even this has its risks, since the ship may become worse). If we unroll the hierarchy of wanting-to-become-better, we can EG see what is good and bad about merely running it forward faster, and try to run it forward in good ways rather than bad ways (as well as other, more radical departures from simple fast-forward amplification).
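Here is a toy sketch of the unrolling, under the simplifying assumption that we truncate the hierarchy at a finite depth; the state and the rules are made up for illustration. Level 0 is the object-level state, level 1 revises it, and level 2 revises level 1; “just running it forward faster” corresponds to calling the loop with no meta-levels for more steps:

```python
# Toy sketch of unrolling a self-revising system into levels. levels[0]
# is the object-level state; levels[k] (k >= 1) is a rule that revises
# levels[k-1] on every step. Everything concrete here (the state, the
# rules) is invented for illustration, and the hierarchy is truncated at
# a finite depth rather than being genuinely infinite.

from typing import List

def run(levels: List, steps: int):
    """Each step, apply every meta-level to the level below it, top-down."""
    for _ in range(steps):
        for k in range(len(levels) - 1, 0, -1):
            levels[k - 1] = levels[k](levels[k - 1])
    return levels[0]

step_rule = lambda s: s + 1.0                        # level 1: how the state changes
meta_rule = lambda rule: (lambda s: 0.9 * rule(s))   # level 2: how level 1 drifts

# "Just run it forward faster": no meta-level, more steps.
print(run([0.0, step_rule], steps=10))

# Unrolled one level: the update rule itself is re-adjusted as we go.
print(run([0.0, step_rule, meta_rule], steps=10))
```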
One disagreement I have with your story is the argument “given the finitude of human brain architecture”. The justification of a belief/norm/algorithm needn’t be something already present in the head. A lot of what we do is given to us by evolution. We can notice those things and question whether they make sense by our current standards. Calling this process finite is kind of like calling a Turing machine finite. There’s a finite core to it, but we can be surprised by what this core does given more working tape.
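As a loose illustration of the finite-core point (not a Turing machine, just a stand-in with the same flavor), a fixed two-branch rule like the Collatz map already has long-run behavior that is hard to read off from the rule itself:

```python
# Loose illustration of the "finite core, more working tape" point. This
# isn't a Turing machine, just a stand-in with the same flavor: the rule
# below is a fixed two-branch core, yet how long it runs on a given input
# is notoriously hard to predict from the rule itself (the Collatz map).

def collatz_steps(n: int) -> int:
    """Steps the fixed 'halve, or 3n+1' rule takes to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Neighbouring inputs, very different run lengths: the finite core does
# not wear its long-run behavior on its sleeve.
print([(n, collatz_steps(n)) for n in (26, 27, 28)])
```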
Why not both? ;3
This is clarifying, thanks.
WRT the last paragraph, I’m thinking in terms of convergent vs divergent processes. So, fixed points, I guess.
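For concreteness, here is a minimal sketch of the convergent-vs-divergent distinction in fixed-point terms; the particular maps are arbitrary examples, not anything from the discussion above:

```python
# Minimal sketch of "convergent vs divergent" in fixed-point terms:
# iterate a map and see whether the trajectory settles. The particular
# maps are arbitrary examples, not anything from the discussion above.

def iterate(f, x0: float, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = f(x)
    return x

converging = lambda x: 0.5 * x + 1.0   # contraction: fixed point at x = 2
diverging  = lambda x: 2.0 * x + 1.0   # expansion: fixed point at x = -1 repels

print(iterate(converging, 0.0))   # approaches 2.0
print(iterate(diverging, 0.0))    # blows up instead of settling
```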