The problem is: is that really a good enough reason to use different priors?
Sum not converging is reason enough; it’s not that there’s a potential “Pascal’s mugging” problem, it’s that the expected utility is entirely undefined.
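To spell out the non-convergence (my own formalization, not part of the comment above; ℓ(h) is just shorthand for the description length of hypothesis h): with a length-based prior the expected utility is

$$\mathbb{E}[U] \;=\; \sum_{h} 2^{-\ell(h)}\, U(h),$$

and nothing stops U(h) from growing faster in ℓ(h) than 2^{-ℓ(h)} shrinks; payoffs like 3^^^3 have very short descriptions, so the sum need not have any finite value at all.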
Consider the similar situation where someone rejects the 2^-(theory length) prior on the basis that it would say God doesn’t exist, and they don’t want to deal with that.
That prior doesn’t say God doesn’t exist; some very incompetent people who explain said prior claim that it does, but the fact is that we do not know and will never know. If anything, Gods are not much longer to encode than universes where intelligent life evolves (hence the Gods in the form of superintelligences, owners of our simulation, and so on).
Are you saying you can get around it just by using better math, instead of messing with priors?
What do you mean? The “bad math” is the idea that the utility is well defined in the first place, when under such a dubious prior it is not. It’s not as if humans use a theory-length prior, anyway.
What you can do is use the “speed prior”, or a variation thereof. It discounts for the size of the universe (roughly speaking), making the sum converge; a toy sketch is at the end of this comment.
Note that this still leaves any practical agent with a potential problem: arguments by potentially hostile parties may bias its approximations of the utility, by providing speculations which involve utilities that are large but not physically impossible under known laws of physics. Because such speculations are highly speculative, the approximate utility calculations do not adjust both sides of the utility comparisons equally.
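Here is a toy numerical sketch of the convergence point (my own illustration under assumed description lengths and runtimes, not Schmidhuber’s actual construction): weight each hypothesis by 2^-length alone versus by 2^-length divided by runtime, and compare the per-hypothesis contributions to expected utility when the payoff grows double-exponentially.

```python
# Toy sketch: length prior vs speed-prior-style weighting when the payoff
# grows much faster than 2^-length shrinks. All numbers are illustrative.

def length_prior_weight(n):
    # assume hypothesis n has description length ~ n bits
    return 2.0 ** -n

def speed_prior_weight(n, runtime):
    # speed-prior-style weight: penalize description length *and* the
    # runtime needed to actually compute the hypothesized outcome
    return 2.0 ** -n / runtime

def utility(n):
    # hypothetical payoff growing double-exponentially in n, standing in
    # for "3^^^3 people"-style rewards
    return 2.0 ** (2 ** n)

for n in range(1, 10):
    u = utility(n)
    # a universe containing u morally relevant beings takes at least ~u
    # steps to simulate, so take runtime >= u
    term_length = length_prior_weight(n) * u   # grows without bound
    term_speed = speed_prior_weight(n, u) * u  # stays at 2^-n
    print(n, term_length, term_speed)
```

Under the length prior the per-hypothesis terms blow up (the full sum over all hypotheses diverges); under the runtime-penalized weighting each term is just 2^-n, so the sum converges.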
Sum not converging is reason enough; it’s not that there’s a potential “Pascal’s mugging” problem, it’s that the expected utility is entirely undefined.
For any prior with infinitely many possibilities, you can come up with some non-converging utility function. Does that mean we can change how likely things are by changing what we want?
The other strategy is to change your utility function, but that doesn’t seem right either. Should I care less about 3^^^3 people just because it’s a situation that might actually come up?
For any prior with infinitely many possibilities, you can come up with some non-converging utility function. Does that mean we can change how likely things are by changing what we want?
A prior is not how likely things are. It’s just a way to slice a total probability of 1 among the competing hypotheses. Allocate slices by length and you get the length-based prior; allocate slices by runtime and length and you get the speed prior.
Ideally you’d want to quantify all the symmetries in the evidence and somehow utilize those, so that you immediately get a prior of 1/6 for each side of a symmetric die when you can’t make predictions. But the theory-length prior doesn’t do that either.
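To make the “slicing” picture concrete (a standard formulation; the speed-prior weight below is the simplified form, not Schmidhuber’s exact construction): for prefix-free descriptions the Kraft inequality already guarantees the slices fit inside a total of 1,

$$\sum_{h} 2^{-\ell(h)} \;\le\; 1,$$

and the two allocations are

$$P_{\text{length}}(h) \;\propto\; 2^{-\ell(h)}, \qquad P_{\text{speed}}(h) \;\propto\; \frac{2^{-\ell(h)}}{t(h)},$$

where ℓ(h) is the description length of hypothesis h and t(h) its runtime. Both are just different ways of dividing the same unit of probability mass.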
The other strategy is to change your utility function, but that doesn’t seem right either. Should I care less about 3^^^3 people just because it’s a situation that might actually come up?
It seems to me that such a situation really should get unlikely faster than 2^-length gets small.
A prior is not how likely things are. It’s just a way to slice a total probability of 1 among the competing hypotheses.
And I could allocate it so that there is almost certainly a god, or even so that there is certainly a god. That wouldn’t be a good idea though, would it?
It seems to me that such a situation really should get unlikely faster than 2^-length gets small.
What would you suggest to someone who had a different utility function, one where you run into this problem even when using the speed prior?
Also, the speed prior looks bad. It predicts the universe should be small and short-lived. This is not what we have observed.
Do you think there is a universe outside of our past light cone? It would increase the program length to limit it to that, but not nearly as much as it would decrease the run time.
And I could allocate it so that there is almost certainly a god, or even so that there is certainly a god. That wouldn’t be a good idea though, would it?
There isn’t a single “Solomonoff induction”; the choice of the machine is arbitrary, and for some machines the simplest way to encode our universe is through some form of god (the creator/owner of a simulation, if you wish). In any case, the prior for a universe with a god is not that much smaller than the prior for a universe without one, because you can obtain a sentient being simply by picking data out of any universe where one evolves. Note that these models with some god work just fine, and no, even though I am an atheist, I don’t see what the big deal is.
Also, the speed prior looks bad. It predicts the universe should be small and short-lived. This is not what we have observed.
The second source of problems is attributing reality to the internals of the prediction method. I’m not sure that is valid for either prior. The laws of the universe are most concisely expressed as properties which hold everywhere rather than as calculation rules of some kind; the rules are derived as alternate structures that share the same properties.