If you have the time, I would be grateful for an intuitive explanation of why this is so. I don't think the linked comment explains it, because if we go on to colonize the universe, our influence will be the same regardless of whether we are the first civilization to have reached our current (2014) level of development or whether thousands have done so but all fell.
“Do we live in a late filter universe?” is not a meaningful question. The meaningful question is “should we choose strategy A, suited to early filter universes, or strategy B, suited to late filter universes?” According to UDT, we should choose the strategy that leads to maximum expected utility given that all similar players choose it, where the expectation averages over both kinds of universe. Naive anthropic reasoning suggests we should assume we are in a late filter universe, since there are many more players there. This, however, is precisely offset by the fact that those players have a poor chance of success even when playing B, so their contribution to the difference in expected utility between A and B is correspondingly smaller. Therefore, we should ignore anthropic reasoning and focus on the a priori probability of an early filter versus a late filter.
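To make the offset concrete, here is a minimal toy calculation. All the numbers are made up for illustration, and the key assumption doing the work is that per-player survival odds behind a late filter scale like 1/N (which is roughly forced on us by the observation that nobody has visibly colonized yet):

```python
# Toy model of the UDT argument above, with made-up numbers.
# Hypotheses: an "early filter" universe (one player at our stage) vs. a
# "late filter" universe (N players at our stage, most of whom will fail).

p_early = 0.5          # a priori probability of an early-filter universe
N = 1_000_000          # players at our stage in a late-filter universe

# Per-player success probabilities (assumed for illustration):
# strategy A is tuned for early-filter worlds, B for late-filter worlds.
a_early, b_early = 0.9, 0.5        # early-filter universe
a_late, b_late = 0.1 / N, 0.9 / N  # late filter: survival odds scale ~1/N

# Naive anthropic (SIA-style) posterior: weight each universe by its
# number of players. The late-filter universe dominates for large N.
w_early = p_early * 1
w_late = (1 - p_early) * N
print("naive P(late | I exist) =", w_late / (w_early + w_late))  # ~1.0

# UDT: pick the policy maximizing total expected successes, averaged
# over universes with the *a priori* weights. The factor of N players
# is cancelled by the ~1/N per-player success rate behind a late filter.
def eu(s_early, s_late):
    return p_early * s_early + (1 - p_early) * N * s_late

print("EU(A) =", eu(a_early, a_late))  # 0.5*0.9 + 0.5*0.1 = 0.50
print("EU(B) =", eu(b_early, b_late))  # 0.5*0.5 + 0.5*0.9 = 0.70
```

The naive posterior is essentially certain we are behind a late filter, yet the N in the expected-utility sum is cancelled by the ~1/N success rates, so the A-versus-B decision ends up depending only on p_early and the payoff structure, not on the anthropic headcount.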
The anthropic reasoning there isn't valid, though. Anthropic reasoning can only be used to rule out impossibilities: if a universe were impossible, we wouldn't be in it. Any inference beyond that makes assumptions about prior distributions and selection effects which have no justification. There are many papers (e.g. http://arxiv.org/abs/astro-ph/0610330) showing how anthropic reasoning becomes anthropic rationalization when it comes to selecting one model over another.
Actually, it is always possible to take anthropic considerations into account by using UDT + the Solomonoff prior. I think cosmologists would benefit from learning about it.
That's an empty statement. It is always possible to take anthropic considerations into account by using [insert decision theory] + [insert prior]. Why did you choose that decision theory, and more importantly, that prior?
We have knowledge about only one universe. A single data point is insufficient to infer any information about universe selection priors.
Thanks for the explanation!