Now, the METI application: if this scenario is true, then sending messages so that the expanding civilization notices us might be risky if they can quieten down and silently englobe or surprise us. (Surprise is likely more effective than englobement, since spamming the sky with quiet relativistic probes is hard to stop.)
When does this matter? If we happen to be far away from the civilization, then they will notice the message late and we could have done all sorts of things in the meantime—escaped, become an equivalently powerful civilization, gone extinct, etc. We would have done the same even if they had not been there. So there is no change.
If they are already here, there is only an effect if they react to the attempt at messaging (“Only talk to civs that want to talk to you”/“Only wipe out civs that might pollute the ether with deliberate messages”); otherwise there is no change.
[ In fact, a civilization that deliberately tries to conceal itself may be particularly concerned with signalling civilizations since they might act as beacons: if they get cut off, that might tell other listeners where something is going on. Keeping them signalling even when well inside their englobement might be good camouflage. Until they are discreetly replaced with decoy young civilizations...]
So the only case that leads to nontrivial effects is the nearby but not here case. Friendly civilizations react by doing something friendly, but that would have happened anyway when they came here. Unfriendly civilizations use the information to be more efficiently unfriendly (like quiet englobement). The main thing that changes is that the opportunity for running away (or other reactive responses) decreases.
So the overall utility change is U(METI)-U(no METI) = -Pr[nearby but not here] * U(running away).
Now, Pr[nearby but not here] seems to be small for rapidly expanding civilizations. If they spread at speed v, then it is 1 − (v/c)^3 (the fraction of the light sphere their colonization front has not yet reached). For v = 0.9c it is 27.1%, and for v = 0.99c it is 2.97%. So it is a small net negative, assuming U(running away) is positive.
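The arithmetic above can be sketched directly. This is a minimal illustration, assuming the civilization has expanded spherically at speed v since some starting time, so its colonization front sits at radius vt while its light front sits at ct, and our position is uniform inside the light sphere; the function names are hypothetical, not from the original.

```python
# Sketch: a civilization expanding at speed v has a colonization front
# at radius v*t and a light front at c*t. For a position uniform inside
# the light sphere, the "nearby but not here" shell (v*t < r < c*t) has
# volume fraction 1 - (v/c)^3.

def pr_nearby_not_here(v_over_c: float) -> float:
    """Volume fraction of the light sphere not yet colonized."""
    return 1.0 - v_over_c ** 3

def utility_change(v_over_c: float, u_running_away: float) -> float:
    """U(METI) - U(no METI) = -Pr[nearby but not here] * U(running away)."""
    return -pr_nearby_not_here(v_over_c) * u_running_away

for v in (0.5, 0.9, 0.99):
    print(f"v = {v}c: Pr[nearby but not here] = {pr_nearby_not_here(v):.3f}")
# v = 0.9c gives ~0.271; v = 0.99c gives ~0.030
```

The faster the expansion, the thinner the dangerous shell, which is why the expected loss shrinks for near-lightspeed civilizations.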
All of the above is conditioned on interstellar/intergalactic expansion being doable. If it isn’t, we get informational game theory instead.
S. Jay Olson’s work on expanding civilizations is very relevant here, e.g. https://arxiv.org/abs/1608.07522 and https://arxiv.org/abs/1512.01521. That work suggests that even non-hidden civilizations will be fairly close to their light front.