One thing to say about negation is that often the model uncertainty is concentrated in the negation. Any probability estimate, say of A (vs. not-A), always has a third option: MU = "(Model Uncertainty) I'm confused, maybe the question doesn't make sense, maybe A isn't a coherent claim, maybe the concepts I used aren't the right concepts to use, maybe I didn't think of a possibility, etc."
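To make this concrete, here is a minimal sketch of the two readings, with hypothetical numbers of my own choosing: a 75% credence in A if the A-vs-~A framing is coherent, and a 90% credence that the framing itself makes sense. Under the first reading the model uncertainty ends up folded into ~A; under the second it gets its own bucket.

```python
# A minimal sketch (my own illustration, not from the original comments) of the two
# readings being discussed. All numbers are hypothetical.

p_a_if_well_posed = 0.75   # credence in A, assuming the A-vs-~A question is well posed
p_well_posed = 0.90        # hypothetical credence that the question itself makes sense

# Reading 1: model uncertainty (MU) is folded into ~A, so "~A" means
# "A is not simply true, including the case where the question is confused".
p_a_r1 = p_a_if_well_posed * p_well_posed            # 0.675
p_not_a_r1 = 1 - p_a_r1                              # 0.325 (includes the 0.10 of MU)

# Reading 2: MU is a separate third bucket, and A / ~A split only the
# well-posed probability mass.
p_a_r2 = p_a_if_well_posed * p_well_posed            # 0.675
p_not_a_r2 = (1 - p_a_if_well_posed) * p_well_posed  # 0.225
p_mu = 1 - p_well_posed                              # 0.100

print(f"Reading 1: A={p_a_r1:.3f}, ~A={p_not_a_r1:.3f}")
print(f"Reading 2: A={p_a_r2:.3f}, ~A={p_not_a_r2:.3f}, MU={p_mu:.3f}")
```

In the first reading the stated "~A: 32.5%" silently absorbs the 10% of "maybe the question is confused", which is what "the model uncertainty is concentrated in the negation" amounts to here.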
I tend to think of writing my propositions in a notepad like this:
A: 75%
B: 34%
C: 60%
And so on. Are you telling me that "~A: 75%" means not only that ~A has a 75% likelihood of being true, but also that A vs. ~A has a 25% chance of being the wrong question? If that were true, I would expect "A: 75%" to mean not only that A is true with 75% likelihood, but also that "A vs. ~A is the right question" holds with 75% likelihood (high model certainty). But can't a proposition be more or less confused/flawed on multiple different metrics, to someone who understands what this whole A/~A business is all about?