All of those questions have known answers, but you have to take them on one at a time. Most of them go away when you switch from discrete (boolean) reasoning to continuous (probabilistic) reasoning.
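The switch the comment above describes can be made concrete with a tiny sketch (the numbers and function name here are illustrative, not from the thread): instead of committing to a belief being true or false, represent it as a probability and update it with Bayes’ rule as evidence arrives.

```python
# Minimal sketch of continuous (probabilistic) reasoning about a belief:
# rather than a Boolean "I believe H / I don't", track P(H) and revise it.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) given P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

belief = 0.5  # start agnostic
# Each observation is twice as likely if the hypothesis is true (0.8 vs 0.4),
# so repeated observations push the belief toward 1 without ever reaching it.
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.4)

print(belief)
```

The point of the sketch is that intermediate degrees of belief compose consistently under updating, which is exactly what the discrete true/false picture cannot do.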
Each of those questions has several known and unknown answers...

Moreover, the same questions apply to your preconception of continuity and probability. How could you know it applies to your inputs? For example: saying “I feel 53% happy” does not make sense unless you think happiness has a definite meaning and is reducible to something measurable. Both are questionable. Does any concept have a definite meaning? Maybe happiness has a “probabilistic” meaning? But what does it rest upon? How do you know that all your input is reducible to measurable constituents, and how could you prove that?
Cox’s theorem. Probability also reduces to measure theory, which requires nothing but a small set of mathematical axioms.
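For readers unfamiliar with the theorem being invoked, its standard statement can be sketched as follows (this is a summary of the well-known result, not part of the thread): any real-valued plausibility measure satisfying Cox’s consistency desiderata can be rescaled to a function $p$ obeying

$$p(A \wedge B \mid C) = p(A \mid B \wedge C)\, p(B \mid C), \qquad p(A \mid C) + p(\neg A \mid C) = 1,$$

which are exactly the product and sum rules of probability. In other words, any sufficiently consistent scheme of graded belief is isomorphic to probability theory.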
My question is: what does “happiness” rest upon? A probability of what? You need an a priori model of what happiness is in order to measure it (that is, a theory of mind), which you do not have. Verifying your model depends on your model...
You argued that “I believe P with probability 0.53” might be as meaningless as “I am 53% happy”. It is a valid response to say, “Setting happiness aside, there actually is a rigorous foundation for quantifying belief—namely, Cox’s theorem.”
The problem here is that “I believe P” presupposes a representation, a model, of P. There must be a pre-existing model before Cox’s theorem can be applied to anything. My question is semantic: what does this model rest on? The probabilities you get will depend on the model you adopt, and I am pretty sure that there is no definitive model/conception of anything (see the indeterminacy of translation analysed by Quine, for example).