It seems like you’ve come to an agreement, so let me ruin things by adding my own interpretation.
The coin has some propensity to come up heads. Say it will in the long run come up heads r of the time. The number r is like a probability in that it satisfies the mathematical rules of probability (in particular, the rate at which the coin comes up heads plus the rate at which it comes up tails must sum to one). But it’s a physical property of the coin, not anything to do with our opinion of it. The number r is just some particular number based on the shape of the coin (and the way it’s being tossed); it doesn’t change with our knowledge of the coin. So r isn’t a “probability” in the Bayesian sense—a description of our knowledge—it’s just something out there in the world.
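To make the frequency reading of r concrete, here is a small sketch (pure standard library; the bias value 0.7 is just an illustrative number): simulating flips of a coin with a fixed physical bias r shows the observed frequency settling toward r, whatever anyone happens to believe about the coin.

```python
import random

def long_run_frequency(r, n_flips, seed=0):
    """Simulate n_flips of a coin whose physical bias is r and
    return the observed fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < r for _ in range(n_flips))
    return heads / n_flips

# The observed frequency approaches the fixed number r, regardless
# of what any observer believes about the coin.
print(long_run_frequency(0.7, 100))        # noisy
print(long_run_frequency(0.7, 1_000_000))  # close to 0.7
```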
Now if we have some Bayesian agent who doesn’t know r, then they must have some probability distribution over it. They could also be uncertain about the weight, w, and have a probability distribution over w. The distribution over r isn’t “meta-uncertainty” because it’s a distribution over a real physical thing in the world, not over our own internal probability assignments. The probability distribution over r is conceptually the same as the one over w.
Now suppose someone is about to flip the coin again. If we knew for certain what the value of r was, we would assign that same value as the probability of the coin coming up heads. If we don’t know r for certain, then we must average over all values of r according to our distribution. The probability of the coin landing heads is its expected value, E(r).
Now E(r) actually is a Bayesian probability—it is our degree of belief that the coin will come up heads. This transformation from r being a physical property to E(r) being a probability is produced by the particular question that we are asking. If we had instead asked about the probability of the coin denting the floor, then this would depend on the weight and would be expressed as E(f(w)) for some function f representing how probable it is that the floor gets dented at each weight. We don’t need a similar f in the case of r because we were free to choose the units of r so that this was unnecessary. If we had instead let r be the expected number of heads in 1000 flips, then we would have had to calculate the probability as E(f(r)) using f(r) = r/1000.
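As a toy illustration of both calculations (the three-point prior is entirely made up for the example):

```python
# Hypothetical discrete prior over r: we think r is one of three values.
prior = {0.3: 0.25, 0.5: 0.5, 0.7: 0.25}   # P(r = value)

# P(heads) = E(r): average r over our distribution for it.
p_heads = sum(r * p for r, p in prior.items())
print(p_heads)  # about 0.5

# If the uncertain quantity had different "units" -- say n, the expected
# number of heads in 1000 flips -- we would need f(n) = n/1000 first:
prior_n = {300: 0.25, 500: 0.5, 700: 0.25}
p_heads_n = sum((n / 1000) * p for n, p in prior_n.items())
print(p_heads_n)  # about 0.5
```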
But the distribution over r does give you the extra information you wanted to describe. Coin 1 would have an r distribution tightly clustered around 1⁄2, whereas our distribution for Coin 2 would be more spread out. But we would have E(r) = 1⁄2 in both cases. Then, when we see more flips of the coins, our distributions change (although our distribution for Coin 1 probably doesn’t change very much; we are already quite certain) and we might no longer have that E(r_1) = E(r_2).
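One way to make the Coin 1 / Coin 2 picture concrete (the Beta family is my choice of prior here, not something forced by the argument): a tight Beta(500, 500) and a uniform Beta(1, 1) both have E(r) = 1/2, but they respond very differently to new flips, since observing k heads in n flips takes Beta(a, b) to Beta(a + k, b + n - k).

```python
def beta_mean(a, b):
    """E(r) under a Beta(a, b) distribution."""
    return a / (a + b)

coin1 = (500, 500)  # tightly clustered around 1/2
coin2 = (1, 1)      # uniform: maximally spread out
assert beta_mean(*coin1) == beta_mean(*coin2) == 0.5  # same E(r)

# Observe 8 heads in 10 flips of each coin:
k, n = 8, 10
coin1 = (coin1[0] + k, coin1[1] + n - k)
coin2 = (coin2[0] + k, coin2[1] + n - k)

print(beta_mean(*coin1))  # ~0.503 -- barely moved; we were quite certain
print(beta_mean(*coin2))  # 0.75   -- moved a lot; E(r_1) != E(r_2) now
```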
But it’s a physical property of the coin; not anything to do with our opinion of it.
Well, coin + environment, but sure, you’re making the point that r is not a random variable in the underlying reality. That’s fine; if we climb the turtles all the way down we’d find a philosophical debate about whether the universe is deterministic, and that’s not quite what we are interested in right now.
The distribution over r isn’t “meta-uncertainty” because it’s a distribution over a real physical thing in the world
I don’t think describing r as a “real physical thing” is useful in this context.
For example, we treat the outcome of each coin flip as stochastic, but you can easily argue that it is not stochastic but rather a “real physical thing” driven by deterministic physics.
For another example, it’s easy to add more meta-levels. Consider Alice forming a probability distribution over what Bob believes the probability distribution of r is...
This transformation from r being a physical property to E(r) being a probability is produced by the particular question that we are asking.
Isn’t r itself “produced by the particular question that we are asking”?
But the distribution over r does give you the extra information you wanted to describe.
That’s what I thought, too, and that disagreement led to this subthread.
But if we both say that we can easily talk about distributions of probabilities, we’re probably in agreement :-)
Isn’t r itself “produced by the particular question that we are asking”?
Yes.