Arguably, assigning a particular floating-point number between 0.0 and 1.0 to represent a subjective degree of belief is a specialized skill, and it could take years of practice to become fluent in numerical-probability-speak.* Another possibility is that it merely adds a kind of pseudo-precision without any benefit over natural language.
In any case, it seems to be an empirical question and so should be answered with empirical data. I guess we won’t really know until we have a sizable number of people using tools such as PredictionBook for extended periods of time. I’ll keep you posted.
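To make the empirical question concrete: one standard way to score a track record of numerical predictions is a Brier score (mean squared error between stated probabilities and outcomes). The predictions below are made-up illustrations, not real PredictionBook data, and this sketch isn't PredictionBook's actual methodology, just one plausible way to measure whether the numbers are earning their keep.

```python
# Hypothetical prediction log: (stated probability, whether it came true).
predictions = [
    (0.9, True),
    (0.7, True),
    (0.6, False),
    (0.3, False),
    (0.1, True),
]

def brier_score(preds):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    0.0 would be a perfect record; a constant 0.5 guess scores 0.25,
    so doing worse than 0.25 suggests the numbers add no value.
    """
    return sum((p - float(outcome)) ** 2 for p, outcome in preds) / len(preds)

print(round(brier_score(predictions), 3))  # → 0.272
```

Comparing a person's Brier score against the 0.25 baseline of always saying "50%" is one crude way to test whether their numerical-probability fluency beats vague verbal hedging.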
*Rigorously defined verbal probabilities do exist, but as far as I know they haven’t been used much since the Late Middle Ages/Early Modern Period.
I’d like to see more on those verbal probabilities, having started to use my own since I’ve found few satisfactory existing versions.
Can you read Latin? If not, then you might want to read The Science of Conjecture: Evidence and Probability before Pascal by James Franklin for an overview of the tradition I was referring to.
I’ll give it a look.