“Cheating Death in Damascus” Solution to the Fermi Paradox
TL;DR: The Great Filter implied by the Fermi paradox could be escaped by choosing a random strategy. However, if all civilizations acted randomly, this could itself be the actual cause of the Fermi paradox. Using a meta-random strategy solves this.
----
“Death in Damascus” is a decision theory problem about attempting to escape an omniscient agent who can predict your behavior.
It goes like this: “You are currently in Damascus. Death knocks on your door and tells you, ‘I am coming for you tomorrow.’ You value your life at $1,000 and would like to escape Death. You have the option of staying in Damascus or paying $1 to flee to Aleppo. If you and Death are in the same city tomorrow, you die; otherwise, you survive. Although Death tells you today that you will meet tomorrow, he made his prediction of whether you’ll stay or flee yesterday, and must stick to it no matter what. Unfortunately for you, Death is a perfect predictor of your actions. All of this information is known to you.”
It was explored in the article “Cheating Death in Damascus,” which suggested a possible solution: use a true random number generator to choose between staying in Damascus and fleeing to Aleppo, which gives a 0.5 chance of survival.
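A minimal simulation of this (my own sketch, not from the article; Death’s prediction is modeled as fixed yesterday, so it cannot depend on a coin flipped today):

```python
import random

def survival_probability(use_coin, trials=100_000):
    """Estimate survival odds against a predictor whose prediction
    was fixed yesterday and perfectly matches any deterministic policy."""
    survived = 0
    for _ in range(trials):
        if use_coin:
            # Death cannot foresee a truly random coin, so its fixed
            # prediction matches your actual city only half the time.
            prediction = "Damascus"
            choice = random.choice(["Damascus", "Aleppo"])
        else:
            # Any deterministic rule is predicted exactly.
            choice = "Aleppo"
            prediction = choice
        survived += (choice != prediction)
    return survived / trials

print(survival_probability(use_coin=False))  # ~0.0: Death always meets you
print(survival_probability(use_coin=True))   # ~0.5: the coin cheats the predictor
```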
The Fermi paradox is a type of “Death in Damascus” problem. The paradox is that other civilizations are not observable, for unknown reasons; one proposed explanation is a Great Filter that kills all young civilizations, and for us, such a filter lies ahead. This means that all the civilizations before us made the same mistake, which resulted in their demise, and since we are a typical civilization, we will make the same mistake too. However, we don’t know what this universal mistake is. Maybe we should not experiment with hadron colliders. Maybe AI always goes rogue, kills everybody, and later self-terminates (rogue AI by itself can’t explain the Fermi paradox, as it would spread through the universe). But maybe the decision not to create AI is the fatal one, as only AI can manage the risks of synthetic biology and other catastrophic risks.
In other words, whatever rational strategy we adopt is exactly what killed all previous civilizations; if we flee to Aleppo, Death will meet us there. In the original problem, Death is omniscient; in the case of the Fermi paradox, omniscience is replaced by our typicality and by mediocrity reasoning: since we are typical, we will make the same mistakes.
In an attempt to cheat Death, that is, to escape the typical Great Filter, we could adopt a random strategy for the future (assuming here that some form of global policy coordination is solved). For example, we could use a random number generator to choose which technologies to develop and which to abandon. In that case, we have a chance of not developing the one dangerous technology that is the universal killer.
But what if this random strategy is the filter itself? That is, what if abandoning some technologies will leave our civilization impaired and contribute to its extinction? In that case, we could implement a meta-random strategy: first, we flip a random coin to choose whether to try the random-abandoning strategy at all, or to go ahead without any “anthropic updates.”
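A minimal sketch of both levels (the technology list is invented for illustration; a real implementation would need a physical source of true randomness, for which Python’s `secrets` module stands in here):

```python
import secrets  # cryptographic randomness as a stand-in for true randomness

TECHNOLOGIES = ["superintelligent AI", "synthetic biology",
                "high-energy collider", "nanotechnology"]

def random_strategy(technologies):
    """Independently keep or abandon each technology by coin flip."""
    return {tech: ("develop" if secrets.randbelow(2) else "abandon")
            for tech in technologies}

def meta_random_strategy(technologies):
    """First coin decides whether to randomize at all."""
    if secrets.randbelow(2):  # heads: try to cheat the Filter
        return random_strategy(technologies)
    return {tech: "develop" for tech in technologies}  # tails: business as usual

print(meta_random_strategy(TECHNOLOGIES))
```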
Now let’s try to estimate the success probability of the random strategy. If this strategy were very effective (for example, if it saved 1 in 10 civilizations, while the total number of civilizations that reached our level of sophistication in the observable universe is 100), we would still expect to observe 10 civilizations; and since those civilizations would observe each other, they would not implement the strategy, as there would be no Fermi paradox for them. So, if the strategy were very effective, there would be no Fermi paradox and no need for such a strategy. Thus, the strategy makes sense only if it gives a survival probability of about 1/N, where N is the total number of civilizations in the past light cone that perished in the late Great Filter. In other words, if we expect that the past light cone held 100 civilizations, all of which met their demise, we should make around 7 random binary choices in our strategy (as 2^7 = 128), and we will, at best, have a 0.01 chance of survival, which is better than 0 but still very small (assuming that a wrong random choice will destroy us).
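The arithmetic behind these numbers, restated (with N = 100 as in the text):

```python
import math

N = 100                      # civilizations in the past light cone, all filtered
k = math.ceil(math.log2(N))  # binary choices needed to spread strategies
                             # thinly enough that only ~1/N hit the safe one
p = 1 / 2**k                 # chance of stumbling on the surviving strategy

print(k, 2**k, p)  # 7 128 0.0078125, i.e. roughly the 0.01 quoted above
```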
Now, we could use the same logic not to escape the Great Filter, but to explain the observed Fermi paradox. If almost all civilizations try random strategies, most of them perished precisely because they adopted non-optimal behavior. Thus, the Fermi paradox becomes a self-fulfilling prophecy. Why would a civilization agree to such a seemingly reckless gamble? Because it replaces an unknowable probability of survival with a small but fixed one. However, in the case of meta-randomness cheating, this is not an explanation of the Fermi paradox, as at least half of all civilizations would not try any cheating at all.
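A toy Monte Carlo of this self-fulfilling reading (assumptions mine: every one of N civilizations gambles on a random strategy whose survival chance is 1/N):

```python
import random

def expected_survivors(N=100, trials=10_000):
    """Average number of surviving civilizations per 'universe' when
    all N of them take the 1/N random gamble."""
    total = sum(sum(random.random() < 1 / N for _ in range(N))
                for _ in range(trials))
    return total / trials

print(expected_survivors())  # ~1.0: the gamble itself yields a near-empty sky
```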
Surely, this is an oversimplification, as it ignores other explanations of the Fermi paradox, like Rare Earth, which are more favorable for our survival (but presumably less likely if we accept Grace’s version of the Doomsday argument).
----
Thank you, that seems like a new and interesting idea. But I’m not sure randomness is the whole story.
Imagine you’re the second student taking an exam, and you know the first student was as smart as you and failed. The exam has one question: guess which fruit the examiner is thinking of. It seems likely that the first student said “apple”, so you shouldn’t just randomize: you should shift away from “apple”. The same goes if there were N students before you; you should shift away from the N most likely answers. Though if students are uncertain about how many students came before, that might make randomization more appealing. There’s probably a nice formula but I’m too lazy to work it out.
(Meta: this post produced surprisingly unequal reactions in different venues: it was immediately banned on one astronomy forum, but got a high score in a Fermi-related reddit group.)
In the case of the Fermi paradox, we just don’t know how many “students” came before us. If our best guess is that there were N civilisations, we may try a strategy with “strangeness” log2(N), where strangeness is the number of non-optimal binary choices.
Yeah, no, I just thought about it some more, and randomizing doesn’t make sense. If we’re certain about N and all previous students were also certain, the best strategy is to name the (N+1)th most likely fruit. And if we’re uncertain about N, there’s still some fruit that maximizes expected utility according to our uncertainty. The maximum can’t require a random mixture of fruits, because expected utility is linear in the mixture, so it’s maximized at a corner.
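A sketch of that deterministic rule for the certain-N case (the fruit list and its prior ordering are invented for illustration):

```python
# Fruits sorted from most to least likely to be the examiner's choice.
fruits_by_prior = ["apple", "banana", "orange", "pear", "mango"]

def best_guess(n_failed_predecessors):
    """If N equally smart students already failed, they exhausted the N
    most likely fruits, so name the (N+1)th most likely one."""
    return fruits_by_prior[n_failed_predecessors]

print(best_guess(2))  # 'orange': skip 'apple' and 'banana'
```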
Edit: Whoops, I’m wrong and you’re right. If you have an a priori equal probability of going first or second, and are then told “if you’re second, then the first failed”, UDT says you should indeed randomize. What a nice problem!
The difference between the student example and the Fermi paradox example is that each student knows his number, but civilizations can’t exchange information, so they don’t know their order numbers in the game. If all civilizations thought that there were around 100 civilizations before them, they would all try the same strategy, and all would fail. That is why some randomization is needed in the Fermi case: just to escape the behavior paths of those civilizations which had exactly the same information as you.
But if we could use some external counter, like the time since the beginning of the universe, to choose between different strategies, this could help us escape randomisation, which is less optimal than the strategy of choosing the best alternative after the N already taken.
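A sketch of such an external counter (illustrative only; the constant is the universe’s approximate age in megayears): civilizations that reach the decision point at different cosmic times land on different strategies without communicating.

```python
def strategy_index(universe_age_myr, n_strategies):
    """Map the shared cosmic clock to a strategy slot, so civilizations
    arising at different epochs diversify deterministically."""
    return int(universe_age_myr) % n_strategies

print(strategy_index(13_787, 8))  # 3: our epoch picks the fourth of 8 strategies
```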
Meta comment: I realised that there is a problem with replacing a real-world problem with a toy problem which looks almost the same: sometimes there is a subtle difference, and the solution of the toy problem will not scale back to the original problem. This is also true when we replace the Doomsday argument with the Sleeping Beauty problem.