Thanks for this, I found it quite clear and helpful.
The radical probabilist does not trust whatever they believe next. Rather, the radical probabilist has a concept of virtuous epistemic process, and is willing to believe the next output of such a process. Disruptions to the epistemic process do not get this sort of trust without reason. (For those familiar with The Abolition of Man, this concept is very reminiscent of Lewis's “Tao”.)
I had some uncertainty/confusion when reading this part: How does it follow from the axioms? Or is it merely permitted by the axioms? What constraints are there, if any, on what a radical probabilist’s subjective notion of virtuous process can be? Can there be a radical probabilist who has an extremely loose notion of virtue such that they do trust whatever they believe next?
It’s worth noting that in the case of logical induction, there’s a more fleshed-out story where the LI eventually has self-trust and can also come to believe probabilities produced by other LI processes. And, logical induction can come to trust outputs of other processes too. For LI, a “virtuous process” is basically one that satisfies the LI criterion, though of course it wouldn’t switch to the new set of beliefs unless they were known products of a longer amount of thought, or had proven themselves superior in some other way.
I had some uncertainty/confusion when reading this part: How does it follow from the axioms? Or is it merely permitted by the axioms? What constraints are there, if any, on what a radical probabilist’s subjective notion of virtuous process can be? Can there be a radical probabilist who has an extremely loose notion of virtue such that they do trust whatever they believe next?
I’m pulling some sleight of hand here: it doesn’t follow from the axioms. I’m talking about what I see as the sensible interpretation of the axioms.
Good job noticing that confusion.
I think that section is pretty important; I stand by what I said, but it definitely needs to be developed more.
It’s worth noting that in the case of logical induction, there’s a more fleshed-out story where the LI eventually has self-trust and can also come to believe probabilities produced by other LI processes. And, logical induction can come to trust outputs of other processes too. For LI, a “virtuous process” is basically one that satisfies the LI criterion, though of course it wouldn’t switch to the new set of beliefs unless they were known products of a longer amount of thought, or had proven themselves superior in some other way.
I don’t think this is true. Two different logical inductors need not trust each other in general, even if one has had vastly longer to think, and so has developed “better” beliefs. They do have reason to trust each other eventually on empirical matters, i.e., matters for which they get sufficient feedback. (I’m unfortunately relying on an unpublished theorem to assert that.) However, for undecidable sentences, I think there is no reason why one logical inductor should consider another to have “virtuous reasoning”, even if the other has thought for much longer.
What we can say is that a logical inductor eventually sees itself as reasoning virtuously. And, furthermore, that “itself” means as mathematically defined—it does not similarly trust “whatever the computer I’m running on happens to believe tomorrow”, since the computational process could be corrupted by e.g. a cosmic ray.
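To give a flavor of what this self-trust amounts to, here is a rough sketch of the self-trust property from the logical induction paper (from memory, notation approximate; not an exact statement). For a deferral function $f$ and a continuous indicator $\mathrm{Ind}_\delta$ of the event that the future self assigns $\phi$ probability above $p$:

$$\mathbb{E}_n\!\left(\phi \cdot \mathrm{Ind}_\delta\!\left(\mathbb{E}_{f(n)}(\phi) > p\right)\right) \;\gtrsim_n\; p \cdot \mathbb{E}_n\!\left(\mathrm{Ind}_\delta\!\left(\mathbb{E}_{f(n)}(\phi) > p\right)\right),$$

i.e., to the extent the current self expects its future (mathematically defined) self to assign $\phi$ probability above $p$, it already treats $\phi$ as having expected value at least $p$ on that event.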
But for both a human and a logical inductor, the epistemic process involves an interaction with the environment. Humans engage in discussion, read literature, observe nature. Logical inductors get information from the deductive process, which they trust to be a source of truth. What distinguishes corrupt environmental influences from non-corrupt ones?