As far as I know, this doesn’t make sense when only one epistemic agent is involved.
If you think there’s a fact of the matter about what p(A) is (or should be), then it makes sense. You can reason as follows: “There are some statements a to which I should assign an 80% probability. What is the probability that A is such an a?”
Unless you think “What probability should I assign to A?” is an entirely different sort of question than simply “What is p(A)?”.
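To make that reasoning concrete, here is a rough sketch (Python, with made-up numbers, not part of the original comment) of treating the “right” value of p(A) as itself uncertain: a distribution over candidate assignments, where the question above just asks how much weight sits on 0.8.

```python
# A minimal sketch (all numbers are illustrative) of being uncertain about
# "what probability should I assign to A?": a distribution over candidate
# values of p(A).

# Hypothetical credences over what the "right" value of p(A) is,
# e.g. 30% sure the right assignment is 0.8, 50% sure it is 0.5, etc.
credence_over_assignments = {
    0.8: 0.3,
    0.5: 0.5,
    0.2: 0.2,
}

# "What is the probability that A is a statement I should assign 0.8 to?"
prob_right_value_is_080 = credence_over_assignments[0.8]

# On this picture, the working probability for A itself is the expectation
# over the candidate values.
effective_p_A = sum(value * weight
                    for value, weight in credence_over_assignments.items())

print(prob_right_value_is_080)  # 0.3
print(effective_p_A)            # 0.8*0.3 + 0.5*0.5 + 0.2*0.2 = 0.53
```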
I have plenty to learn about Bayesian agents, so I may be wrong. But I think that this would be a mixing of the object-language and the meta-language.
I’m supposing that a Bayesian agent evaluates probabilities p(A) where A is a sentence in a first-order logic L. So how would the agent evaluate the probability that it itself assigns a certain probability to some sentence?
We can certainly suppose that the agent’s domain of discourse D includes the numbers in the interval (0, 1) and the functions mapping sentences in L to the interval (0, 1). For each such function f, let ‘f’ be a function-symbol whose interpretation, as assigned by the agent, is f. Similarly, for each number x in (0, 1), let ‘x’ be a constant-symbol whose interpretation is x.
Now, how do we get the agent to evaluate the probability that p(A) = x? The natural thing to try might be to have the agent evaluate p(‘p’(A) = ‘x’). But the problem is that ‘p’(A) = ‘x’ is not a well-formed formula in L: a function symbol takes terms as its arguments, and the sentence A is a formula, not a term, so ‘p’(A) is not a term and ‘p’(A) = ‘x’ cannot be built by the formation rules.
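To see where the formation rules block this, here is a toy sketch (plain Python, not any real logic library, and only an illustration of the point above) in which terms and formulas are separate syntactic categories; the attempt to build ‘p’(A) = ‘x’ then fails at construction time, because the function symbol is handed a formula rather than a term.

```python
# Rough sketch of first-order syntax, just to show where "'p'(A) = 'x'"
# breaks down: function symbols take *terms* as arguments, and a sentence A
# is a formula, not a term.

class Term:
    pass

class Const(Term):                      # constant symbols, e.g. 'x' naming a number
    def __init__(self, name):
        self.name = name

class FuncApp(Term):                    # f(t1, ..., tn) for terms t1, ..., tn
    def __init__(self, symbol, args):
        if not all(isinstance(a, Term) for a in args):
            raise TypeError("function symbols apply to terms, not formulas")
        self.symbol, self.args = symbol, args

class Formula:
    pass

class Atom(Formula):                    # an atomic sentence, standing in for A
    def __init__(self, name):
        self.name = name

class Eq(Formula):                      # t1 = t2, only well formed for terms
    def __init__(self, left, right):
        if not (isinstance(left, Term) and isinstance(right, Term)):
            raise TypeError("'=' relates terms, not formulas")
        self.left, self.right = left, right

A = Atom("A")          # a sentence of L
x = Const("'x'")       # a constant symbol naming some number in (0, 1)

# The attempted construction "'p'(A) = 'x'":
try:
    Eq(FuncApp("'p'", [A]), x)
except TypeError as err:
    print("not well formed:", err)     # A is a Formula, not a Term
```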