Yes. Because we’re trying to express uncertainty about the consequences of axioms, not about the axioms themselves.
The kind of uncertainty common_law describes does seem to be something people actually experience: we’re uncertain about the consequences of the laws of physics, while simultaneously being uncertain of the laws of physics themselves, and uncertain whether we’re even reasoning about it logically. But it’s not the kind of uncertainty we’re trying to model in the applications I’m talking about. The missing piece in these applications is probabilities conditional on axioms.
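To put that in symbols (notation mine, purely for illustration): writing A for the conjunction of the axioms and φ for a statement in their language, the quantity these applications need is

\[ P(\varphi \mid A), \]

and the trouble is that standard probability theory forces this to be 1 or 0 whenever A logically settles φ, whether or not anyone has actually found the proof.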
Philosophically, I want to know how you calculate the rational degree of belief in every proposition.
If you automatically assign the axioms a certainty that can’t actually be attained, you don’t get the rational degree of belief in every proposition, since the set of “propositions” includes ones not conditioned on the axioms.
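To spell that out (same illustrative notation as before): by the law of total probability,

\[ P(\varphi) \;=\; P(\varphi \mid A)\,P(A) \;+\; P(\varphi \mid \lnot A)\,P(\lnot A). \]

Stipulating P(A) = 1 makes the second term vanish, so the unconditional P(φ) collapses into P(φ | A), and the framework stays silent about how likely φ is if the axioms fail.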
Hmm. Yeah, that’s tough. What do you use to calculate the probabilities of the principles of logic you use to calculate probabilities?
Although, it seems to me that a bigger problem than the circularity is that I don’t know what kinds of things are evidence for principles of logic. At least for the probabilities of, say, mathematical statements conditional on the principles of logic we use to reason about them, we have some idea. Many of a generalization’s consequences being verified is evidence for the generalization, for example. A proof of an analogous theorem is evidence for a theorem. So I can see that the kinds of things that are evidence for mathematical statements are other mathematical statements.
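That’s just Bayes’ theorem, at least in the idealized, logically omniscient setting: if a generalization G entails an instance E, so that P(E | G) = 1, then verifying E gives

\[ P(G \mid E) \;=\; \frac{P(E \mid G)\,P(G)}{P(E)} \;=\; \frac{P(G)}{P(E)} \;\ge\; P(G), \]

with a strict increase whenever P(E) < 1. (For a proof of an analogous theorem, P(E | G) would only be high rather than exactly 1, but the direction of the inequality is the same as long as P(E | G) > P(E).)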
I don’t have nearly as clear a picture of what kinds of things lead us to accept principles of logic, or of what kind of statements those things are: whether they’re empirical observations, principles of logic themselves, or something else.
Hmm? If these are physically or empirically meaningful axioms, we can apply regular probability to them. Now, the laws of logic and probability themselves might pose more of a problem. I may worry about that once I can conceive of them being false.