For any prior with infinitely many possibilities, you can come up with some non-converging utility function. Does that mean we can change how likely things are by changing what we want?
A prior is not how likely things are. It’s just a way to slice a total probability of 1 among the competing hypotheses. Allocate slices by program length and you get the length-based prior; allocate them by runtime and length and you get the speed prior.
Ideally you’d want to quantify all the symmetries in the evidence and somehow exploit them, so that you immediately get a prior of 1/6 for each side of a symmetric die when you can’t make predictions. But the theory-length prior doesn’t do that either.
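To make the slicing concrete, here is a minimal sketch (with made-up lengths and runtimes, not a real universal machine) showing how the same hypotheses get different slices depending on whether you allocate by length alone or by length plus runtime:

```python
import math

# Toy hypothesis space: (name, program length in bits, runtime in steps).
# All numbers are invented purely for illustration.
hypotheses = [
    ("short & fast", 5, 10),
    ("short & slow", 5, 10 ** 6),
    ("long & fast", 20, 10),
    ("long & slow", 20, 10 ** 6),
]

def length_prior(length, runtime):
    # Slice by description length only: weight 2^-length.
    return 2.0 ** -length

def speed_prior(length, runtime):
    # Slice by length *and* runtime: weight 2^-(length + log2(runtime)),
    # i.e. 2^-length / runtime (one common form of the speed prior).
    return 2.0 ** -(length + math.log2(runtime))

for prior in (length_prior, speed_prior):
    weights = [prior(l, t) for _, l, t in hypotheses]
    total = sum(weights)  # normalize so the slices sum to 1
    print(prior.__name__)
    for (name, _, _), w in zip(hypotheses, weights):
        print(f"  {name:14s} {w / total:.8f}")
```

Under the length prior, runtime is irrelevant and the two short programs split almost all the mass; under the speed prior, the slow ones are penalized by a factor of their runtime.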
The other strategy is to change your utility function, but that doesn’t seem right either. Should I care less about 3^^^3 people just because it’s a situation that might actually come up?
It seems to me that such a situation really should get unlikely faster than 2^-length gets small.
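For a rough sense of why 2^-length loses that race: each extra arrow in 3↑^n 3 adds only about one symbol to the description, while the quantity itself grows as an exponential tower. A sketch with invented bit counts (3^^^3 itself is far too large to compute, so this stops at two arrows):

```python
def up_arrow(a, n, b):
    # Knuth's up-arrow a ↑^n b; only safe for tiny arguments.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

L0 = 100  # invented base description length of the mugger's scenario
for n in (1, 2):
    people = up_arrow(3, n, 3)  # 27, then 3^27 ≈ 7.6e12
    bits = L0 + n               # assume one extra bit per extra arrow
    print(f"n={n}: log2(prior) = {-bits}, log2(utility) ≈ {people.bit_length()}")
# At n=3 (i.e. 3^^^3) the prior penalty grows by a single bit more,
# while log2(utility) itself becomes an astronomically tall tower.
```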
A prior is not how likely things are. It’s just a way to slice a total probability of 1 among the competing hypotheses.
And I could allocate it so that there is almost certainly a god, or even so that there is certainly a god. That wouldn’t be a good idea though, would it?
It seems to me that such a situation really should get unlikely faster than 2^-length gets small.
What would you suggest to someone with a different utility function, for whom this problem arises even when using the speed prior?
Also, the speed prior looks bad. It predicts the universe should be small and short-lived. This is not what we have observed.
Do you think there is a universe outside of our past light cone? Limiting the simulation to just our past light cone would increase the program length, but not nearly as much as it would decrease the run time.
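A back-of-the-envelope version of that trade-off, taking the speed prior’s weight as 2^-(length + log2(runtime)) and using invented numbers for everything:

```python
import math

BASE_LEN = 10_000   # invented length of the unrestricted universe program
EXTRA_BITS = 100    # invented cost of encoding "stop at the past light cone"
T_FULL = 10 ** 200  # invented runtime for simulating everything
T_CONE = 10 ** 100  # invented runtime for just our past light cone

def log2_speed_weight(length_bits, runtime):
    # Speed-prior weight 2^-(length + log2(runtime)), compared in log2.
    return -(length_bits + math.log2(runtime))

full = log2_speed_weight(BASE_LEN, T_FULL)
cone = log2_speed_weight(BASE_LEN + EXTRA_BITS, T_CONE)
print(f"log2 weight, full universe:   {full:.0f}")
print(f"log2 weight, light cone only: {cone:.0f}")
print(f"light-cone version favored by 2^{cone - full:.0f}")
```

On those numbers the extra hundred bits are dwarfed by the runtime saving, which is the direction the question points at.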
And I could allocate it so that there is almost certainly a god, or even so that there is certainly a god. That wouldn’t be a good idea though, would it?
There isn’t a single “Solomonoff induction”: the choice of universal machine is arbitrary, and for some machines the simplest way to encode our universe is through some form of god (the creator/owner of a simulation, if you wish). In any case, the prior for a universe with a god is not that much smaller than the prior for one without, because you can obtain a sentient being simply by picking its data out of any universe where such a being evolves. Note that these models with some god work just fine; and no, even though I am an atheist, I don’t see what the big deal is.
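One way to make the “picking data out” point quantitative, with invented bit counts: the godless model pays for the physics program plus the bits needed to locate an evolved observer’s data, while the simulation model pays for a (possibly longer) simulator program; the resulting odds are a modest power of two either way.

```python
# All bit counts are invented, purely to show the shape of the comparison.
LEN_PHYSICS = 1000    # program for a bare-physics universe
LEN_LOCATE = 200      # bits to pick an evolved observer's data out of it
LEN_SIMULATOR = 1300  # program for "a simulated universe with an owner"

log2_prior_no_god = -(LEN_PHYSICS + LEN_LOCATE)
log2_prior_god = -LEN_SIMULATOR

# Odds of 2^100 sound large, but on this scale they hinge entirely on
# the choice of machine; a different machine could shrink or reverse them.
print(f"odds(no god : god) = 2^{log2_prior_no_god - log2_prior_god}")
```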
Also, the speed prior looks bad. It predicts the universe should be small and short-lived. This is not what we have observed.
The second source of problems is the attribution of reality to the internals of the prediction method. I’m not sure that is valid for either prior. The laws of the universe are most concisely expressed as properties which hold everywhere rather than as calculation rules of some kind; the rules are derived as alternative structures that share the same properties.
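A toy illustration of that distinction, with a harmonic oscillator standing in for “the universe”: the law can be stated as a property holding at every time (energy conservation), while any particular update rule is just one derived structure that (approximately) shares it.

```python
def energy(x, v):
    # The law as a property that should hold everywhere in time:
    # E = v^2/2 + x^2/2 is conserved (unit mass, unit spring constant).
    return 0.5 * v * v + 0.5 * x * x

def leapfrog_step(x, v, dt):
    # One *calculation rule* whose trajectories approximately share that
    # property; other rules with the same invariant would serve as well.
    v_half = v - 0.5 * dt * x  # acceleration = -x
    x_new = x + dt * v_half
    v_new = v_half - 0.5 * dt * x_new
    return x_new, v_new

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(10_000):
    x, v = leapfrog_step(x, v, 0.01)
print(f"energy drift after 10,000 steps: {abs(energy(x, v) - e0):.2e}")
```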