Eliezer,
Just to be clear . . . going back to your first paragraph, that 0.5 is a prior probability for the outcome of one draw from the urn (that is, for the random variable that equals 1 if the ball is red and 0 if the ball is white). But, as you point out, 0.5 is not a prior probability for the series of ten draws. What you’re calling a “prior” would typically be called a “model” by statisticians. Bayesians traditionally divide a model into likelihood, prior, and hyperprior, but as you implicitly point out, the dividing line between these is not clear: ultimately, they’re all part of the big model.
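To make the distinction concrete, here is a quick simulation sketch (my own illustration, not anything from your post; theta stands for the urn's unknown proportion of red balls, and the two models are just examples): two models that agree on the probability that a single draw is red, but assign very different probabilities to the series of ten draws.

```python
# A small sketch: two models that agree on one draw but not on the series.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 200_000

# Model A: draws are i.i.d. with P(red) = 0.5 on every draw.
draws_a = rng.random((n_sims, 10)) < 0.5

# Model B: uniform prior on theta (the urn's proportion of red balls),
# with draws i.i.d. given theta.
theta = rng.random(n_sims)
draws_b = rng.random((n_sims, 10)) < theta[:, None]

# Both models give about 0.5 for a single draw being red...
print(draws_a[:, 0].mean(), draws_b[:, 0].mean())   # ~0.50, ~0.50

# ...but very different probabilities for "all ten draws are red":
print(draws_a.all(axis=1).mean())   # ~0.5**10, about 0.001
print(draws_b.all(axis=1).mean())   # ~1/11,    about 0.091
```

The marginal 0.5 is the same in both cases; what differs is everything else in the model, which is why a "prior probability" for one draw doesn't pin down the distribution over the whole series.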