Thanks, that’s an interesting point, but I don’t think it can be assigned a probability of being true or false, because that would imply that the underlying concept can somehow be demonstrated true or false.
If it can be demonstrated true or false, then its logical impossibility would be 0%, because anything that is testable but has yet to be tested has some possibility, however small, of being true or false, or else it is self-evident (100%).
Otherwise, it cannot be demonstrated true or false, in which case its logical impossibility is 100%.
But the whole point of the original post was that you had logical uncertainty. That’s why in Bayesian reasoning you can’t have probabilities of 0 or 1: it leaves room, however small, for updating.
See also: How to Convince Me That 2 + 2 = 3
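To make the 0-and-1 point concrete, here is a minimal sketch of a single Bayesian update (the likelihood numbers are made up for illustration): under Bayes’ rule, a prior of exactly 0 or 1 can never move, no matter how strongly the evidence favors the hypothesis.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' rule."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Evidence with a 99:1 likelihood ratio in favor of H.
for prior in (0.5, 0.01, 0.0, 1.0):
    print(prior, "->", round(bayes_update(prior, 0.99, 0.01), 4))

# 0.5  -> 0.99   a moderate prior updates sharply
# 0.01 -> 0.5    even a small prior can move a lot
# 0.0  -> 0.0    a prior of exactly 0 never moves
# 1.0  -> 1.0    a prior of exactly 1 never moves
```

Since the posterior is the prior multiplied by a likelihood term, a prior of 0 or 1 is a fixed point of the update: no evidence, however strong, can shift it.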
Thanks, I don’t really understand where I was coming from before. Maybe my newfound understanding of the terminology around uncertainty has finally updated my intuitions :)