One obstacle to understanding in this conversation seems to be that it involves the notion of “second-order probability”. That is, a probability is given to the proposition that some other proposition has a certain probability (or a probability within certain bounds).
As far as I know, this doesn’t make sense when only one epistemic agent is involved. An ideal Bayesian wouldn’t compute probabilities of the form p(x1 < p(A) < x2) for any proposition A.
Of course, if two agents are involved, then one can speak of “second-order probabilities”. One agent can assign a certain probability that the other agent assigns some probability. That is, if I use probability function p and you use probability function p*, then I might very well want to compute p(x1 < p*(A) < x2).
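A minimal sketch of this two-agent computation, assuming my uncertainty about your credence p*(A) is represented as a discrete distribution over a few candidate values (the proposition A, the candidate values, and the weights below are all made up for illustration):

    # My (agent p) uncertainty about your (agent p*) credence in A, as a discrete
    # distribution: belief_about_your_credence[v] = p(p*(A) = v).
    # The candidate values and weights are purely illustrative.
    belief_about_your_credence = {
        0.2: 0.25,  # 25% that you assign p*(A) = 0.2
        0.5: 0.50,  # 50% that you assign p*(A) = 0.5
        0.8: 0.25,  # 25% that you assign p*(A) = 0.8
    }

    def prob_credence_in_interval(dist, x1, x2):
        """p(x1 < p*(A) < x2): my probability that your credence lies strictly between x1 and x2."""
        return sum(w for v, w in dist.items() if x1 < v < x2)

    print(prob_credence_in_interval(belief_about_your_credence, 0.4, 0.9))  # 0.50 + 0.25 = 0.75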
And the “two agents” here might be oneself at two different times, or one’s conscious self and one’s unconscious intuitive probability-assigning cognitive machinery.
From where I’m sitting, it looks like SilasBarta just needs to be clear that he’s using the coherent notion of “second-order probability”. Then the disagreement dissolves.
One obstacle to understanding in this conversation seems to be that it involves the notion of “second-order probability”.
Naw, that part’s cool. (I already had the idea of a meta-probability in my armamentarium.) The major obstacle to understanding was that we meant different things by the word “valid”.
As far as I know, this doesn’t make sense when only one epistemic agent is involved.
If you think there’s a fact of the matter about what p(A) is (or should be), then it makes sense. You can reason as follows: “There are some situations where I should assign an 80% probability to a proposition a. What is the probability that A is such an a?”
Unless you think “What probability should I assign to A?” is an entirely different sort of question from simply “What is p(A)?”
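One way to make that reasoning concrete (my own illustration, with made-up numbers): put a distribution over “the probability I should assign to A”, and note that the credence you end up acting on is just the expectation of that distribution.

    # Sketch of the "what probability should I assign to A?" reading.
    # meta[v] = my probability that A is the kind of proposition deserving credence v.
    # The values and weights are purely illustrative.
    meta = {
        0.8: 0.5,  # 50%: A is "such an a", deserving credence 0.8
        0.5: 0.5,  # 50%: A deserves only credence 0.5
    }

    # Collapsing the second-order distribution gives the credence actually acted on:
    credence_in_A = sum(v * w for v, w in meta.items())
    print(credence_in_A)  # 0.8 * 0.5 + 0.5 * 0.5 = 0.65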
If you think there’s a fact of the matter about what p(A) is (or should be), then it makes sense. You can reason as follows: “There are some situations where I should assign an 80% probability to a proposition a. What is the probability that A is such an a?”
I have plenty to learn about Bayesian agents, so I may be wrong. But I think that this would be a mixing of the object-language and the meta-language.
I’m supposing that a Bayesian agent evaluates probabilities p(A) where A is a sentence in a first-order logic L. So how would the agent evaluate the probability that it itself assigns a certain probability to some sentence?
We can certainly suppose that the agent’s domain of discourse D includes the numbers in the interval (0, 1) and the functions mapping sentences in L to the interval (0, 1). For each such function f, let ‘f’ be a function symbol for which f is the interpretation assigned by the agent. Similarly, for each number x in (0, 1), let ‘x’ be a constant symbol for which x is the interpretation.
Now, how do we get the agent to evaluate the probability that p(A) = x? The natural thing to try might be to have the agent evaluate p(‘p’(A) = ‘x’). But the problem is that ‘p’(A) = ‘x’ is not a well-formed formula in L. Writing a sentence as the argument of a function symbol is not one of the valid ways to construct well-formed formulas.
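A toy illustration of the well-formedness point, under the usual first-order grammar in which a function symbol applies only to terms (the class names and the construction-time check are mine, not anything established in the thread):

    # Toy first-order syntax: a function symbol may only be applied to terms.
    # Trying to build 'p'(A), with A a sentence rather than a term, is rejected.
    # Purely illustrative.

    class Term:
        pass

    class Const(Term):
        """A constant symbol, e.g. 'x' naming a number in (0, 1)."""
        def __init__(self, name):
            self.name = name

    class FuncApp(Term):
        """A function symbol applied to term arguments only."""
        def __init__(self, symbol, *args):
            for arg in args:
                if not isinstance(arg, Term):
                    raise TypeError(f"{symbol!r} applied to a non-term of type {type(arg).__name__}")
            self.symbol = symbol
            self.args = args

    class Formula:
        pass

    class Atom(Formula):
        """An atomic sentence of the object language L, e.g. A."""
        def __init__(self, name):
            self.name = name

    A = Atom("A")
    x = Const("x")

    FuncApp("f", x)        # fine: a function symbol applied to a term
    try:
        FuncApp("p", A)    # not fine: A is a Formula, not a Term
    except TypeError as err:
        print(err)         # 'p' applied to a non-term of type Atom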