“It’s the program length that matters, not its time or memory performance”
That’s a common assumption, but does this really make sense?
It seems to me that if you have a program doing two things (simulating two people, say), and it spends twice the overhead on the second one, then the first should in some sense exist twice as much.
The same argument could be applied to universes.
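To make that concrete with a toy sketch of my own (assuming “amount of existence” scales with the compute spent per observer-moment, which is exactly the assumption under discussion):

    # Toy sketch only: measure taken to be inversely proportional to per-observer cost.
    def relative_measure(cost_per_observer_moment):
        """Hypothetical weight for how much an observer 'exists' per unit of compute."""
        return 1.0 / cost_per_observer_moment

    person_a = relative_measure(1.0)   # baseline simulation overhead
    person_b = relative_measure(2.0)   # twice the overhead
    print(person_a / person_b)         # -> 2.0: person A "exists twice as much"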
Okay, I wouldn’t normally do this, but... what’s with the downvote? I honestly have no idea what the problem is, which makes avoiding it hard. Please explain.
https://en.wikipedia.org/wiki/Speed_prior / http://www.idsia.ch/~juergen/speedprior.html
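For anyone skimming the links: as I understand it, the rough idea is that where the universal prior weights a program p by 2^-length(p), the Speed Prior additionally discounts by how long p takes to produce the data; the exact definition goes through Schmidhuber’s FAST algorithm, so treat this as a simplification:

    # Simplified comparison only, not the exact construction from the links above.
    def universal_weight(length_bits):
        return 2.0 ** (-length_bits)

    def speed_style_weight(length_bits, runtime_steps):
        # common shorthand: also divide by the runtime needed to print the data
        return 2.0 ** (-length_bits) / runtime_steps

    # a short-but-slow program vs. a longer-but-fast one:
    print(universal_weight(10), speed_style_weight(10, runtime_steps=10**6))
    print(universal_weight(20), speed_style_weight(20, runtime_steps=10**2))
    # the plain length weight favors the short program; the runtime discount flips it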
“Use of the Speed Prior has the disadvantage of leading to less optimal predictions”
Unless I’m misunderstanding something, doesn’t this imply we’ve already figured out that that’s not the true prior? Which would be very interesting indeed.
Would someone mind explaining what “the true prior” is? Given that probability is in the mind, I don’t see how the concept makes sense.
I was going for “Matches what the universe is actually doing”, whether that means setting it equal to the apparent laws of physics or to something like a dovetailer.
Sure, there’s no way of being sure we’ve figured out the correct rule; that doesn’t mean there isn’t one.
In other words, it’s the prior that’s 1 on the actual universe and 0 on everything else.
Sure. A little hard to determine, fair enough.
I’m confused also. I think they may mean something like “empirically not the optimal prior we can use with a small amount of computation” but that doesn’t seem consistent with how it is being used.
I’m not even sure that makes sense, since if this is based on empirical observations, presumably there was some prior prior that was updated based on those observations.
Well, they could be using a set of distinct priors (say 5 or 6 of them) and then noting over time which of them required the least major updating in general, but I don’t think that’s what is going on either. We may need to just wait for Baughn to clarify what they meant.
As far as I know, any computable prior will have that disadvantage relative to full uncomputable SI.
You’re saying a Solomonoff Inductor would be outperformed by a variant that weighted quick programs more favorably, I think. (At the very least, it makes approximations computable.)
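To illustrate the parenthetical with a deliberately tiny toy of my own (the “programs” here are just repeating bit patterns on a trivial machine, nothing like a real program enumeration): once you cap program length and charge for runtime, the whole mixture becomes a finite sum you can actually compute.

    # Toy, hedged sketch: a finite, computable mixture over length- and time-bounded
    # "programs", where a program is a bit pattern emitted cyclically by a toy machine.
    from itertools import product

    ALPHABET = "01"
    MAX_LEN = 4       # cap on "program" (pattern) length
    MAX_STEPS = 64    # cap on runtime: one step per emitted symbol

    def run(pattern, steps):
        """Toy machine: emit the pattern cyclically, one symbol per step."""
        return "".join(pattern[i % len(pattern)] for i in range(steps))

    def predict_next(observed):
        """Weight each consistent toy program by 2^-length / runtime, then mix."""
        scores = {sym: 0.0 for sym in ALPHABET}
        steps = len(observed) + 1          # runtime charged to reach the prediction
        if steps > MAX_STEPS:
            return scores
        for length in range(1, MAX_LEN + 1):
            for pattern in map("".join, product(ALPHABET, repeat=length)):
                out = run(pattern, steps)
                if out[:-1] == observed:   # consistent with the data so far
                    scores[out[-1]] += 2.0 ** (-length) / steps   # length + time penalty
        total = sum(scores.values())
        return {sym: w / total for sym, w in scores.items()} if total else scores

    print(predict_next("010101"))   # puts essentially all weight on "0" coming next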
Whether penalizing for space/time cost increases the corresponding complexity metric of the Standard Model is an interesting question, and there’s a good chance the penalty is large, since simulating QM seems to require exponential time; but for starters I’m fine with just an estimate of the Kolmogorov complexity.
Well, I’m saying the possibility is worth considering. I’m hardly going to claim certainty in this area.
As for QM...
The metric I think makes sense is, roughly, observer-moments divided by CPU time. Simulating QM takes exponential time, yes, but there’s an equivalent exponential increase in the number of observer-moments. So QM shouldn’t have a penalty vs. classical.
On the flip side, this type of prior would heavily favor low-fidelity simulations, but I don’t know if that’s any kind of strike against it.
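A back-of-envelope version of the QM-vs-classical point above, with purely illustrative numbers, and leaning on a many-worlds-style reading in which every branch contributes observer-moments:

    # Illustrative only: assumes the branch count grows at the same exponential rate
    # as the cost of simulating the quantum state.
    def classical_ratio(n, cost_per_observer=10):
        steps = cost_per_observer * n              # linear simulation cost
        observer_moments = n
        return observer_moments / steps

    def quantum_ratio(n, cost_per_observer=10):
        steps = cost_per_observer * n * 2 ** n     # exponential cost of simulating QM
        observer_moments = n * 2 ** n              # but exponentially many branches too
        return observer_moments / steps

    for n in (1, 5, 10):
        print(n, classical_ratio(n), quantum_ratio(n))   # the two ratios come out equal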