If we are using Solomonoff induction, won’t the expected population be infinite?
Very crudely, finite worlds of size n will have probability about 2^-K(n) in the Solomonoff prior, where K is the Kolmogorov complexity of the binary representation of n. This works out at about 1/n x 1/log n x 1/log log n x … for most values of n, taking base 2 logarithms and repeating the logs until we hit a small constant. The probability is higher for simple “non-random” values of n like a googolplex or 3^^^3.
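As a quick sanity check on that heuristic, here is a minimal Python sketch (an illustration, not part of the original argument) that computes the iterated-log code length log2(n) + log2(log2(n)) + … and confirms that 2 raised to minus that length is exactly the product 1/n x 1/log n x 1/log log n x …. The cutoff constant is an arbitrary choice, and K(n) itself is of course uncomputable.

    import math

    def iterated_log_length(n, cutoff=2.0):
        # Crude stand-in for K(n): log2(n) + log2(log2(n)) + ...,
        # repeating the logs until the term drops below a small constant.
        # K(n) itself is uncomputable; this only mirrors the heuristic above.
        length, term = 0.0, float(n)
        while term > cutoff:
            term = math.log2(term)
            length += term
        return length

    def iterated_log_product(n, cutoff=2.0):
        # The matching product n * log2(n) * log2(log2(n)) * ...
        product, term = 1.0, float(n)
        while term > cutoff:
            product *= term
            term = math.log2(term)
        return product

    for n in (100, 10**6, 10**12):
        # 2^-(iterated log length) equals 1/(n * log n * log log n * ...),
        # the crude estimate of the Solomonoff prior on n used above.
        print(n, 2.0 ** -iterated_log_length(n), 1.0 / iterated_log_product(n))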
Then if the expected population in worlds of size n is proportional to n (again this is very crude), we get an expected population proportional to:
Sigma {n=1 to infinity} n x 2^-K(n)
which is at least
Sigma {n=1 to infinity} 1/log n x 1/log log n x …
and that is a divergent sum (the terms eventually dominate 1/n, so it diverges by comparison with the harmonic series). So SIA predicts that for any size n, we are almost certainly in a world of size bigger than n, and we can’t normalize the distribution! Problem.
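A minimal numerical sketch (again just an illustration, not from the original comment) of that divergence: for every n >= 4 the term 1/(log2 n x log2 log2 n) is at least 1/n, so the sum dominates the harmonic series, and the partial sums below keep climbing, however slowly.

    import math

    def term(n):
        # n x 2^-K(n) is at least about 1/(log2 n x log2 log2 n) for typical n
        return 1.0 / (math.log2(n) * math.log2(math.log2(n)))

    partial = 0.0
    checkpoint = 1000
    for n in range(4, 10**6 + 1):       # start at 4 so log2(log2(n)) is positive
        partial += term(n)
        if n == checkpoint:
            print(f"partial sum up to n = {n}: {partial:.1f}")
            checkpoint *= 10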
There might be a better story with a less crude treatment, but frankly I’m doubtful. As I understand it, using SIA is equivalent (in your anthropic decision theory) to having an additive utility function which grows in proportion to the population size (or at least to the population of people whose decisions are linked to yours). And so the utility function is unbounded. And unbounded utility functions are a known problem with Solomonoff induction, at least according to Peter de Blanc (http://arxiv.org/abs/0712.4318). So I think the crude treatment is revealing a real problem here.
Looking back at this, I’ve noticed there is a really simple proof that the expected population size is infinite under Solomonoff induction. Consider the “St Petersburg” hypothesis:
Sh == With probability 2^-n, the population size is 2^n, for n = 1, 2, 3, etc.
This Sh is a well-defined, computable hypothesis, so under the Solomonoff prior it receives a non-zero prior probability p > 0. This means that, under the Solomonoff prior, we have:
E[Population Size] = p.E[Population Size | Sh] + (1-p).E[Population Size | ~Sh]
Assuming the second term is >= 0 (for example, that no hypothesis gives a negative population size), this means that E[Population Size] >= p.E[Population Size | Sh].
But E[Population Size | Sh] is infinite, so under the Solomonoff prior, E[Population Size] is also infinite.
This shows that SIA is incompatible with Solomonoff induction, as it stands. The only way to achieve compatibility is to use an approximation to Solomonoff induction which rules out hypotheses like Sh, e.g. by imposing a hard upper bound on population size. But what is the rational justification for that?
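A tiny sketch (my own illustration) of why E[Population Size | Sh] is infinite: the expectation truncated at n = N is exactly N, so it grows without bound, and any non-zero prior weight p preserves that. The value of p below is a placeholder, not anything derived from the Solomonoff prior.

    # Truncated expectation of population size under Sh: the partial sum
    # over n = 1..N of 2^-n * 2^n is exactly N, so it grows without bound,
    # and multiplying by any fixed prior weight p > 0 cannot tame it.
    def truncated_expectation(N):
        return sum((2.0 ** -n) * (2 ** n) for n in range(1, N + 1))

    p = 1e-9   # placeholder prior weight for Sh; any p > 0 gives the same conclusion
    for N in (10, 100, 1000):
        print(N, truncated_expectation(N), p * truncated_expectation(N))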
Wow, someone who’s read my paper! :-) It is because of considerations like the ones you mention that I’m tempted to require bounded utilities. Or unbounded utilities but only finitely many choices to be faced (which is equivalent to a bounded utility). It’s the combination of unbounded utility and unboundedly many options that is the problem.
Stuart, thanks for this. I’m interested in how you’d apply the bound.

One approach is just to impose an arbitrary cut-off on all worlds above a certain large size (say, ignore everything bigger than 3^^^3 galaxies), and then scale utility with population all the way up to the cut-off. That would give a bounded utility function, and an effect very like SIA. Most of your decisions would be weighted towards the assumption that you are living in one of the largest worlds, with size just below the cut-off. If you’d cut off at 4^^^^4 galaxies, you’d assume you were in one of those worlds instead. However, since there don’t seem to be many decisions that are critically affected by whether we are in one of 3^^^3 or one of 4^^^^4 galaxies, this probably works.
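A rough sketch of that cut-off behaviour, reusing the crude 1/(n x log n x log log n) prior from the earlier comment and an arbitrary cap standing in for 3^^^3: with utility scaling with population up to the cap, most of the SIA-style decision weight sits within a small factor of the cut-off.

    import math

    def crude_prior(n):
        # the crude stand-in for 2^-K(n) used above: 1/(n x log2 n x log2 log2 n)
        return 1.0 / (n * math.log2(n) * math.log2(math.log2(n)))

    CAP = 10**6    # arbitrary hard cut-off on world size, standing in for 3^^^3
    weights = [n * crude_prior(n) for n in range(4, CAP + 1)]   # SIA-style weight: prior x population
    total = sum(weights)
    near_cap = sum(weights[CAP // 10 - 4:])    # worlds within a factor of 10 of the cut-off
    print(f"fraction of decision weight within 10x of the cut-off: {near_cap / total:.2f}")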
Another approach is to use a bounded utility function of more self-centered construction. Let’s suppose you care a lot about yourself and your family, a fair amount about your friends and colleagues, a little bit (rather dilutely) about anyone else on Earth now, and rather vaguely about future generations of people, but not much at all about alien civilizations, future AIs, etc. In that case your utility for a world of 3^^^3 alien civilizations is clearly not going to be much bigger than your utility for a world containing only the Earth, Sun and nearby planets (plus maybe a few nearby stars to admire at night). And so your decisions won’t be heavily weighted towards such big worlds. A betting coupon which cost a cent and paid off a million dollars if your planet turned out to be the only inhabited one in the universe would look like a very good deal. This then looks more like SSA reasoning than SIA.
This last approach looks more consistent to me, and more in line with the utility functions humans actually have, rather than the ones we might wish them to have.
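A toy version of such a self-centered bounded utility function; the circles, caps and saturation constants are invented purely for illustration and are not from the original comments.

    # Hypothetical "circles of caring": each circle contributes at most `cap`
    # utility, saturating as its population grows. All numbers are made up
    # purely for illustration.
    CIRCLES = {
        "self_and_family":    {"cap": 100.0, "half_sat": 5},
        "friends_colleagues": {"cap": 30.0,  "half_sat": 50},
        "rest_of_earth":      {"cap": 10.0,  "half_sat": 10**9},
        "future_people":      {"cap": 5.0,   "half_sat": 10**11},
        "aliens_and_AIs":     {"cap": 1.0,   "half_sat": 10**20},
    }

    def utility(populations):
        # Bounded utility: sum over circles of cap * m / (m + half_sat),
        # which can never exceed the sum of the caps (146 here).
        total = 0.0
        for name, params in CIRCLES.items():
            m = populations.get(name, 0)
            total += params["cap"] * m / (m + params["half_sat"])
        return total

    small_world = {"self_and_family": 5, "friends_colleagues": 100,
                   "rest_of_earth": 8 * 10**9}
    huge_world = dict(small_world, aliens_and_AIs=10**100)   # stand-in for a 3^^^3-sized world
    print(utility(small_world), utility(huge_world))   # the two differ by at most 1 (the aliens' cap)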