Linked decisions and a “nice” solution to the Fermi paradox
One of the more speculative solutions to the Fermi paradox is that all civilizations decide to stay home, thereby meta-causing other civilizations to stay home too, and thus allowing the Fermi paradox to have a nice solution. (I remember reading this idea in Paul Almond’s writings on evidential decision theory, which unfortunately no longer seem to be available online.) The plausibility of this argument is definitely questionable. It requires a very high degree of goal convergence both within and among different civilizations. Let us grant this convergence and assume that, indeed, most civilizations arrive at the same decision and that they make their decision knowing this. One paradoxical implication then is: if a civilization decides to attempt space colonization, they are virtually guaranteed to face unexpected difficulties (for otherwise space would already be colonized, unless they are the first civilization in their neighborhood to attempt space colonization). If, on the other hand, everyone decides to stay home, there is no reason to think that there would be any unexpected difficulties if one tried. Space colonization can either be easy, or you can try it, but not both.
Can the basic idea behind the argument be formalized? Consider the following game: There are N>>1 players. Each player in turn is offered the chance to push a button. Pushing the button yields a reward R>0 with probability p and a punishment P<0 otherwise. (R corresponds to successful space colonization, while P corresponds to a failed colonization attempt.) Not pushing the button gives zero utility. If a player pushes the button and receives R, the game is immediately aborted, while the game continues if a player receives P. Players do not know how many other players were offered the button before them; they only know that no player before them received R. Players also don’t know p. Instead, they have a probability distribution u(p) over possible values of p (with u(p)>=0 and int_{0}^{1}u(p)dp=1). We also assume that the decisions of the different players are perfectly linked.
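To make the setup concrete, here is a minimal sketch of one round of the game in Python, assuming every player pushes the button (the function name and return convention are mine, purely for illustration):

```python
import random

def play_round(N: int, p: float) -> int:
    """One round of the game, assuming every player pushes the button.
    Each push yields R with probability p; the round is aborted as soon
    as some player receives R. Returns how many players received P first
    (a return value of N means all players pushed and all received P)."""
    for i in range(N):
        if random.random() < p:   # this player receives R -> game aborted
            return i
    return N
```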
Naively, it seems that players simply have an effective success probability p_eff,1=int_{0}^{1}p*u(p)dp, and they should push the button iff p_eff,1*R+(1-p_eff,1)*P>0. Indeed, if players decide not to push the button, they should expect that pushing the button would have given them R with probability p_eff,1. The situation becomes more complicated if a player decides to push the button. If a player pushes the button, they know that all players before them have also pushed the button and have received P. Before taking this knowledge into account, players are completely ignorant about the number i of players who were offered the button before them, and have to assign each number i from 0 to N-1 the same probability 1/N. Taking into account that all players before them have received P, the variables i and p become correlated: the larger i, the higher the probability of a small value of p. Formally, the joint probability distribution w(i,p) for the two variables is, according to Bayes’ theorem, given by w(i,p)=c*u(p)*(1-p)^i, where c is a normalization constant. The marginal distribution w(p) is given by w(p)=sum_{i=0}^{N-1}w(i,p). Summing the geometric series gives w(p)=c*u(p)*(1-(1-p)^N)/p, and using N>>1 (so that (1-p)^N is negligible), we find w(p)=c*u(p)/p. The normalization constant is thus c=[int_{0}^{1}u(p)/p*dp]^{-1}. Finally, we find that the effective success probability taking the linkage of decisions into account is given by
p_eff,2 = int_{0}^{1}p*w(p)dp = c = [int_{0}^{1}u(p)/p*dp]^{-1} .
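As a numerical sanity check on this formula (my own illustration, not part of the argument), the sketch below uses the prior u(p)=6p(1-p), i.e. a Beta(2,2) distribution, for which p_eff,1=1/2 and int_{0}^{1}u(p)/p*dp=int_{0}^{1}6(1-p)dp=3, so p_eff,2 should come out as 1/3. Each sampled (p,i) pair is weighted by (1-p)^i, the probability that the player at position i is offered the button at all:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 1_000, 1_000_000           # N >> 1 players, Monte Carlo sample size

# Prior u(p) = Beta(2,2): p_eff,1 = E[p] = 1/2, and
# int u(p)/p dp = int 6(1-p) dp = 3, so p_eff,2 = 1/3.
p = rng.beta(2, 2, size=trials)        # true success probability per sample
i = rng.integers(0, N, size=trials)    # my position, uniform over 0..N-1

# Weight each sample by the probability that all i earlier pushers
# received P, i.e. that I am even offered the button.
w = (1.0 - p) ** i
p_eff_2 = np.sum(w * p) / np.sum(w)    # success probability given the offer

print(f"p_eff,1 ≈ {p.mean():.3f}  (analytic: 0.500)")
print(f"p_eff,2 ≈ {p_eff_2:.3f}  (analytic: 1/3 ≈ 0.333)")
```

The weighting step is just the selection effect in code: positions with many failed predecessors are reached mainly when p is small.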
This is the expected chance of success if players decide to push the button. Players should push the button iff p_eff,2*R+(1-p_eff,2)*P>0. It follows from the convexity of the function x->1/x (for positive x), via Jensen’s inequality, that p_eff,2<=p_eff,1. So by deciding to push the button, players decrease their expected success probability from p_eff,1 to p_eff,2; they cannot both push the button and have the unaltered success probability p_eff,1. Linked decisions can thus explain why no one pushes the button if p_eff,2*R+(1-p_eff,2)*P<0, even though we might have p_eff,1*R+(1-p_eff,1)*P>0 and pushing the button naively seems to have positive expected utility.
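Plugging in illustrative numbers (my own, chosen only to exhibit the sign flip): with the Beta(2,2) prior from above, p_eff,1=1/2 and p_eff,2=1/3, so a pair such as R=1, P=-0.6 makes the naive expected utility positive but the linked one negative:

```python
R, P = 1.0, -0.6      # illustrative reward/punishment, chosen so the sign flips
p1, p2 = 0.5, 1 / 3   # p_eff,1 and p_eff,2 for the Beta(2,2) prior above

print(f"naive  EU: {p1 * R + (1 - p1) * P:+.3f}")   # +0.200 -> push
print(f"linked EU: {p2 * R + (1 - p2) * P:+.3f}")   # -0.067 -> stay home
```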
It is also worth noting that if u(0)>0, the integral int_{0}^{1}u(p)/p*dp diverges, so that p_eff,2=0. This means that given perfectly linked decisions and a sufficiently large number of players N>>1, players should never push the button if their distribution u(p) satisfies u(0)>0, irrespective of the ratio of R and P. This is due to an observer selection effect: if a player decides to push the button, then the fact that they are even offered the button is most likely due to p being very small, and thus to a large number of players having been offered the button before them.
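This limit can also be seen numerically (again my own sketch): rerunning the weighted estimate with the uniform prior u(p)=1, which has u(0)=1>0, the estimate comes out as roughly (N/(N+1))/H_N ~ 1/ln(N), where H_N is the N-th harmonic number, and it creeps toward zero as N grows:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 1_000_000
p = rng.random(trials)                   # uniform prior u(p)=1, so u(0)=1>0

for N in (10, 100, 1_000, 10_000):
    i = rng.integers(0, N, size=trials)  # my position among N players
    w = (1.0 - p) ** i                   # probability of being offered the button
    print(f"N={N:>6}: p_eff,2 ≈ {np.sum(w * p) / np.sum(w):.3f}")

# Prints roughly 0.31, 0.19, 0.13, 0.10: p_eff,2 shrinks like 1/ln(N)
# and tends to 0 as N -> infinity, matching the divergence of int u(p)/p dp.
```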
I think you have a typo—it should be “if a player receives P”.
Thanks, I fixed it.
If I am choosing the algorithm that all civilisations are going to follow, then one civilisation succeeding would lead to large positive utilities for all future civilisations. Why would I let the game end?
Not sure I understand your question, but:
I assume that each civilization only cares about itself. So one civilization succeeding does not “lead to large positive utilities for all future civilisations”, only for itself. If civilization A assigns positive or negative value to civilization B succeeding, the expected utility calculations become more complicated.
You cannot “let the game end”. The fact that the game ends when one player receives R only represents the fact that each player knows that no previous player has received R (i.e., we arguably know that no civilization so far has successfully colonized space in our neighborhood).
Wouldn’t it be more accurate to state that R represents an enduring multi-system technological civilization and not mere colonial presence?
I don’t think we can arguably claim that space in our stellar neighborhood has never been colonized, just that it does not appear to be colonized currently.
The Copernican Principle seems to imply that all observers see the same kind of universe, namely, an uninhabited one apart from the observers’ home planet. The SETI cultists’ refusal to accept this observation, and their increasingly convoluted conjectures for why we can’t detect ETs, show that the quest has become irrational by scientific skeptics’ own standards.
After some googling, their conjectures seem to be “well, we’ve only checked a few thousand stars, and only in the radio region of the spectrum. Our best highly speculative guess has detectable civilizations originating on one in every few million stars, so either civilizations are expanding to fewer than 1000 stars on average, or they’re not using radio waves, or our guesses about how common they are are wrong.”
Absent FTL communication, it is hard to imagine a scenario in which any central control remains after civilization has spread to more than a few stars. There would be no stopping the expansion after that, so the first explanation is unlikely.
A civilization whose area of expansion includes our own solar system would be perceivable by many means other than radio, so the second explanation is really not relevant.
That leaves the third as the most likely explanation, I am afraid.
Each expansion party is led by an AI with a shared utility function and a specified way of resolving negotiations.
I don’t see how this amounts to central control. At best it is parallel predetermination, but that breaks down because the actions of the AI are determined by the environment, not the utility function alone. Central control implies two-way communication and is impractical when the latency is measured in decades.
What does this second sentence mean? If everyone sees that the universe is uninhabited, this makes the Great Filter more of a problem, not less of one, and it needs a resolution. That you don’t like some of the explanations doesn’t make the attempt to understand what is going on irrational.