Your last paragraph is wrong. Here’s an excruciatingly detailed explanation.
That does clarify what you originally meant. However, this still seems “rather suspicious”—due to the 1.0:
if a Bayesian computer program assigns probability 0.87 to proposition X, then obviously it ought to assign probability 1 to the fact that it assigns probability 0.87 to proposition X.
I’m willing to bite the bullet here because all hell breaks loose if I don’t. We don’t know how a Bayesian agent can ever function if it’s allowed (and therefore required) to doubt arbitrary mathematical statements, including statements about its own algorithm, current contents of memory, arithmetic, etc. It seems easier to just say 1.0 as a stopgap. Wei Dai, paulfchristiano and I have been thinking about this issue for some time, with no results.
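The stopgap position can be sketched in code. This is a hypothetical illustration (the class and method names are mine, not from the thread): an agent that is uncertain about ordinary propositions but, as a stopgap, assigns probability exactly 1.0 or 0.0 to any claim about its own current credences.

```python
# A minimal sketch, assuming the "1.0 stopgap": the agent holds
# ordinary credences in (0, 1), but meta-propositions about its own
# credences get probability exactly 1.0 (true) or 0.0 (false).

class IntrospectiveAgent:
    def __init__(self, credences):
        # credences: map from proposition name to probability in [0, 1]
        self.credences = dict(credences)

    def p(self, proposition):
        """Credence in an ordinary (world) proposition."""
        return self.credences[proposition]

    def p_self(self, proposition, claimed_value):
        """Credence in the meta-proposition:
        'this agent assigns claimed_value to proposition'.
        The stopgap: answer 1.0 if true, 0.0 if false -- never in between."""
        return 1.0 if self.credences.get(proposition) == claimed_value else 0.0

agent = IntrospectiveAgent({"X": 0.87})
print(agent.p("X"))             # 0.87
print(agent.p_self("X", 0.87))  # 1.0
print(agent.p_self("X", 0.5))   # 0.0
```

The suspicious part is exactly the `p_self` method: it never returns an intermediate value, so the agent cannot express any doubt about its own algorithm or memory contents, which is the bullet being bitten.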