Could we think of the connotation of a sentence as related to the Bayesian evidence about the speaker that we get from the fact that he is the sort of person who would say that sentence?
For example, ‘there are differences in ability between races’ has the connotation of normative racism, because normative racists are more likely to utter it.
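A minimal sketch of that reading, with a binary speaker type and invented numbers purely for illustration: the connotation would be the size of the prior-to-posterior shift in P(speaker type) produced by the utterance.

```python
# Sketch of "connotation as Bayesian evidence about the speaker".
# All probabilities below are made up for illustration, not empirical estimates.

def posterior_speaker_type(prior: float,
                           p_utter_given_type: float,
                           p_utter_given_not_type: float) -> float:
    """P(type | utterance) via Bayes' rule for a binary speaker type."""
    evidence = (p_utter_given_type * prior
                + p_utter_given_not_type * (1.0 - prior))
    return p_utter_given_type * prior / evidence

# Hypothetical numbers: the sentence is 10x more likely to come from the
# stigmatized speaker type than from anyone else.
prior = 0.05
posterior = posterior_speaker_type(prior, 0.20, 0.02)
print(f"prior={prior:.3f}, posterior={posterior:.3f}")  # ~0.05 -> ~0.345
# The connotation, on this reading, is that prior -> posterior shift.
```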
That would be a good way to implement connotation in an AI. I don’t think we are that accurate. To retrieve sufficient Bayesian evidence for general reasoning about B on being reminded of A, supposing you already had P(A), you’d have to retrieve any two of P(A|B), P(B|A), and P(B), right? (Bayes’ rule then fixes the third.) But most current models assume concept activation retrieves one number per related concept (an activation level), not two.
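To spell out the bookkeeping point: given P(A), any two of P(A|B), P(B|A), and P(B) determine the third via Bayes’ rule. A small sketch with illustrative numbers only:

```python
# Given P(A) and any two of {P(A|B), P(B|A), P(B)}, Bayes' rule recovers
# the third. Numbers are illustrative.

p_a = 0.3                                        # P(A), assumed already known

# Case 1: retrieved P(B) and P(A|B); derive P(B|A).
p_b, p_a_given_b = 0.1, 0.6
p_b_given_a = p_a_given_b * p_b / p_a            # = 0.2

# Case 2: retrieved P(B) and P(B|A); derive P(A|B).
p_a_given_b_rec = p_b_given_a * p_a / p_b        # = 0.6

# Case 3: retrieved P(A|B) and P(B|A); derive P(B).
p_b_rec = p_b_given_a * p_a / p_a_given_b        # = 0.1

print(p_b_given_a, p_a_given_b_rec, p_b_rec)
# A single scalar "activation level" per related concept cannot carry the
# two numbers this retrieval needs.
```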