“Use of the Speed Prior has the disadvantage of leading to less optimal predictions”
Unless I’m misunderstanding something, doesn’t this imply we’ve already figured out that that’s not the true prior? Which would be very interesting indeed.
Would someone mind explaining what “the true prior” is? Given that probability is in the mind, I don’t see how the concept makes sense.
I was going for “Matches what the universe is actually doing”, whether that means setting it equal to the apparent laws of physics or to something like a dovetailer.
Sure, there’s no way of being sure we’ve figured out the correct rule; doesn’t mean there isn’t one.
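For anyone unfamiliar with the term: a dovetailer interleaves the execution of every possible program so that each one eventually gets unboundedly many steps of compute, without any single program having to finish first. Here is a minimal sketch in Python (entirely my own illustration, with toy generator “programs” standing in for a real universal machine):

```python
from itertools import count, islice

def make_program(n):
    """Stand-in 'program' n: an endless computation we can advance one step at a time."""
    def gen():
        for step_number in count(1):
            yield (n, step_number)  # a real dovetailer would step a universal machine here
    return gen()

def dovetail():
    """In round k, admit program k-1 and give programs 0..k-1 one step each,
    so every program eventually receives unboundedly many steps."""
    programs = {}
    for k in count(1):
        programs[k - 1] = make_program(k - 1)
        for p in range(k):
            yield next(programs[p])

# First fifteen scheduled (program, step) pairs: (0,1), (0,2), (1,1), (0,3), ...
print(list(islice(dovetail(), 15)))
```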
In other words, it’s the prior that’s 1 on the actual universe and 0 on everything else.
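Spelled out (my formalisation of that sentence, not anything stated elsewhere in the thread): if $H^{*}$ denotes the hypothesis that exactly describes the actual universe, the “true prior” would be

$$P(H) \;=\; \begin{cases} 1 & \text{if } H = H^{*},\\ 0 & \text{otherwise,} \end{cases}$$

which no amount of evidence would ever shift.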
Sure. A little hard to determine, fair enough.
I’m confused also. I think they may mean something like “empirically not the optimal prior we can use with a small amount of computation” but that doesn’t seem consistent with how it is being used.
“empirically not the optimal prior we can use with a small amount of computation”
I’m not even sure that makes sense, since if this is based on empirical observations, presumably there was some prior prior that was updated based on those observations.
Well, they could be using a set of distinct priors (say 5 or 6 of them) and then noting over time which of them required less major updating in general, but I don’t think that is what is going on either. We may just need to wait for Baughn to clarify what they meant.
As far as I know, any computable prior will have that disadvantage relative to full uncomputable SI.
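To make the comparison concrete (my notation, not the commenter’s): the uncomputable Solomonoff prior weights each program by its length alone, whereas the Speed Prior also discounts by running time, roughly

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-|p|}, \qquad S(x) \;\approx\; \sum_{p\,:\,U(p)=x*} \frac{2^{-|p|}}{t_{p}(x)},$$

where $U$ is a universal monotone machine, $U(p)=x*$ means that program $p$’s output begins with $x$, and $t_{p}(x)$ is the time $p$ takes to print $x$ (the second expression is only a rough characterisation of Schmidhuber’s definition). The sum defining $M$ ranges over all programs and cannot be computed, which is the sense in which any computable prior, the Speed Prior included, gives up some predictive optimality.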