(Meta: this post produced surprisingly unequal reactions in different venues: it was immediately banned on one astronomical forum, but got a high score in a Fermi-related reddit group.)
In the case of the Fermi paradox, we just don’t know how many “students” came before us. If our best guess is that N civilizations came before us, we may try a strategy with “strangeness” log₂(N), where strangeness is the number of non-optimal binary choices.
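A minimal sketch of that randomized strategy, assuming we have a list of candidate options ranked by prior plausibility; the function name, parameters, and the uniform-deviation scheme are my illustrative choices, not anything from the original discussion:

```python
import math
import random

def strangeness_pick(ranked_options, n_guess):
    """Make ceil(log2(N)) uniformly random binary choices, i.e. draw a
    uniform index in [0, 2^k). N identical agents then spread over ~N
    distinct options instead of all colliding on the single best one."""
    k = max(1, math.ceil(math.log2(n_guess)))  # number of binary deviations
    index = random.getrandbits(k)              # uniform integer in [0, 2^k)
    return ranked_options[min(index, len(ranked_options) - 1)]

# e.g. with n_guess=100, k=7, so picks land uniformly among the top 128 options
```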
Yeah, no, I just thought about it some more, and randomizing doesn’t make sense. If we’re certain about N and all previous students were also certain, the best strategy is to name the (N+1)-th most likely fruit. And if we’re uncertain about N, there’s still some fruit that maximizes expected utility under our uncertainty. The maximum can’t require a random mixture of fruits, because expected utility is linear in the mixing probabilities, so it’s maximized at the corners.
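Spelling out the linearity step in my own notation: if a mixed strategy names fruit f_i with probability p_i, its expected utility is a convex combination of the pure payoffs,

```latex
\mathrm{EU}(p) \;=\; \sum_i p_i \,\mathrm{EU}(f_i) \;\le\; \max_i \mathrm{EU}(f_i),
```

so some pure fruit does at least as well as any mixture: a linear function on the probability simplex attains its maximum at a vertex.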
Edit: Whoops, I’m wrong and you’re right. If you have an a priori equal probability of going first or second, and are then told that “if you’re second, then the first failed”, UDT says you should indeed randomize. What a nice problem!
The difference between the student example and the Fermi paradox example is that each student knows his number, but civilizations can’t exchange information, so they don’t know each other’s order numbers in the game. If all civilizations think that there were around 100 civilizations before them, they will all try the same strategy, and all will fail. That is why some randomization is needed in the Fermi case: just to escape the behavior paths of those civilizations that had exactly the same information as you.
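A toy simulation of that failure mode, under assumptions of my own (agents pick in sequence, and a pick “succeeds” if it differs from every earlier pick):

```python
import random

def successes(num_civs=100, num_options=100, randomize=True):
    """Identical agents pick without communicating; a pick succeeds if it
    differs from every earlier pick. Deterministic identical strategies
    give exactly one success; uniform randomization lets later agents
    escape the shared behavior path."""
    taken, wins = set(), 0
    for _ in range(num_civs):
        pick = random.randrange(num_options) if randomize else 0
        if pick not in taken:
            wins += 1
        taken.add(pick)
    return wins

# successes(randomize=False) -> 1 (everyone names the same "best" option);
# successes(randomize=True)  -> ~63 on average (distinct picks out of 100)
```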
But if we could use some external counter, like the time since the beginning of the universe, to choose between different strategies, this could help us avoid randomization, which is less optimal than the strategy of choosing the best alternative after N.
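A sketch of that coordination device; the million-year epoch window and the modular mapping from cosmic age to an option index are invented purely for illustration:

```python
def pick_by_cosmic_clock(ranked_options, cosmic_age_years, window_years=10**6):
    """Use a shared external counter (time since the beginning of the
    universe) to pick deterministically: civilizations arising in different
    epochs map to different options, avoiding both collisions and the
    expected-utility loss of pure randomization."""
    epoch = cosmic_age_years // window_years       # which epoch we are in
    return ranked_options[epoch % len(ranked_options)]
```

The window width does the real work here: it has to be narrow enough that civilizations with the same information rarely share an epoch, or they collide just as in the deterministic case.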
Meta comment: I realized that there is a problem with replacing a real-world problem with a toy problem that looks almost the same: sometimes there is a subtle difference, and the solution of the toy problem will not scale back to the original problem. This is also true when we replace the Doomsday argument with the Sleeping Beauty problem.