I think you are overestimating the observability of our civilization. https://what-if.xkcd.com/47/ has an analysis of it. A galactic civilization with huge telescopes could easily spot us, and a Dyson sphere would be easily visible to us. However, our telescope sensitivity and their signal strength mean we might only detect aliens a few light years away, and only if we were pointing a telescope at them while they were broadcasting. That said, it's possible that we will go through a highly observable stage before our light-speed expansion, sitting around a Dyson sphere for hundreds of years, so this analysis is still useful.
Yeah, that’s a good point. I will amend that part at some point.
Also, the analysis might have some predictions if civilisations don’t pass through a (long) observable stage before they start to expand. It increases the probability that a shockwave of intergalactic expansion will arrive at Earth soon. Still, if the region of our past light cone where young civilisations might exist is small enough, we probably just lose information on where the filter is likely to be.
If the shock wave travels at anything below c, say 0.9c, then we could observe the incoming shock wave before it arrives. Moreover, because of the t^4 volume rule, the chances that we are in the outer volume of the cone, where we could observe the incoming shock wave, are larger: about 0.35 for 0.9c.
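For what it’s worth, here is a minimal sketch (my own reading, not necessarily the calculation you had in mind) of where the 0.35 figure could come from: if the t^4 rule is taken to mean that the chance of finding ourselves within a fraction x of the light front’s radius scales as x^4, then the probability of sitting between the shock front (at v·t) and the light front (at c·t) is 1 − (v/c)^4 ≈ 0.34 for v = 0.9c, whereas a plain spatial shell argument would give 1 − (v/c)^3 ≈ 0.27.

```python
# Hedged sketch of the "0.35 for 0.9c" figure (my assumption, not stated above):
# read the t^4 rule as saying the chance of finding ourselves within a fraction
# x of the light front's radius scales as x**4. Then the probability of being
# outside the shock front (radius v*t) but inside the light front (radius c*t),
# i.e. able to watch the wave coming before it hits, is 1 - (v/c)**4.
# A purely spatial shell argument would instead give 1 - (v/c)**3.

def p_see_incoming_wave(v_over_c: float, exponent: int = 4) -> float:
    """Probability of lying between the shock front and the light front,
    assuming an x**exponent weighting of our radial position."""
    return 1.0 - v_over_c ** exponent

print(p_see_incoming_wave(0.9))               # ~0.344, close to the quoted 0.35
print(p_see_incoming_wave(0.9, exponent=3))   # ~0.271, plain volume-shell fraction
```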
I think that the shock originators know all this and try to send information signals ahead of their physical starships, in what I call a SETI-attack.
Yes, I suppose the only way this would not be an issue is if the aliens are travelling at a very high fraction of the speed of light, and the expansion of the Universe means that they will never reach spatially distant parts of it in time for this to matter.
In a SETI-attack, is the idea that the information signals are disruptive, so that the civilisations about to be annihilated are too disrupted (perhaps by war or devastating technological failures) to defend themselves?
The idea is that aliens purposely send dangerous AI code aimed at self-replicating and transmitting the code farther. There are a lot of technical details about how this could happen, which I described in a recently published article, available here: https://philpapers.org/rec/TURTRC