circular justifications seem necessary in practice
I didn’t see any arguments that point to that, unless you mean the regress argument / disjunction.
Yes, I agree: the essay doesn’t really contain a significant argument for this point. “Seem necessary in practice” is more of an observation, a statement of how things seem to me.
The closest thing to a positive argument for the conclusion is this:
However, in retrospect I think it’s pretty clear that any foundations are also subject to justificatory work, and the sort of justification needed is of the same kind as is needed for everything else. Therefore, coherentism.
And this, which is basically the same argument:
My reasons [...] are practical: I’m open to the idea of codifying excellent foundational theories (such as Bayesianism, or classical logic, or set theory, or what-have-you) which justify a huge variety of beliefs. However, it seems to me that in practice, such a foundation needs its own justification. We’re not going to find a set of axioms which just seem obvious to all humans once articulated. Rather, there’s some work to be done to make them seem obvious.
I also cite Eliezer stating a similar conclusion:
Everything, without exception, needs justification. Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops.
I think it’s pretty clear that any foundations are also subject to justificatory work
EV is the boss turtle at the bottom of the turtle stack. Dereferencing justification involves a boss battle.
there’s some work to be done to make them seem obvious
There’s work to show how justification for further things follows once EV is among the starting assumptions, but no work is needed to take on EV as an assumption in the first place, since people already have EV-calculating behavior built in, and this can be pointed out to them.
Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops
I reject this! Justification results from EV calculations, and only EV calculations, in the form of trust values assigned to assumptions in particular contexts.