“The ball is blue” only gets assigned a probability by your prior when “blue” is interpreted not as a word you don’t understand, but as a causal hypothesis about previously unknown laws of physics that allow light to have two numbers assigned to it that you didn’t previously know about, in addition to the one number you do know about.
Note that a conversant AI will likely have a causal model of conversations, and so there are two distinct things going on here: “what are my beliefs about words that I don’t understand used in a sentence” and “what are my beliefs about physics I don’t understand yet.” This split is a potential source of confusion, and the conversational model is one reason why the betting argument for quantifying uncertainties meets serious resistance.
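One rough way to factor that split (a sketch, with notation of my own rather than anything from the exchange): write S for the sentence as heard, let m range over the candidate meanings the conversational model supplies, and assume the physical facts depend on the wording only through the chosen meaning. Then

\[ P(\text{sentence is true} \mid S) \;=\; \sum_{m} P(m \mid S)\, P(\text{ball has property } m) \]

where the first factor is the conversational uncertainty (what does “blue” pick out here?) and the second is the physics uncertainty (given that referent, say a hypothesis about extra numbers attached to light, how likely is the ball to actually have it?).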
To me the conversational part of this seems way less complicated/interesting than the unknown causal models part; if I have any ‘philosophical’ confusion about how to treat unknown strings of English letters, it is not obvious to me what it is.