given sufficient automation or AI (which has a lot of implications for the Fermi Paradox)
No. See Katja Grace’s article here. AI can act as a Filter for doomsday-type arguments, but expansion in a lightcone is not a likely Filter from a Fermi standpoint: if the expansion proceeds at almost any rate not extremely close to lightspeed, we’d still expect to have time to see it coming, and at speeds slower than that (like the 50% and 80% used in your article), the problem becomes even more severe.
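To put rough numbers on that warning-time point (a back-of-envelope sketch; the 1,000-light-year distance is chosen purely for illustration, not taken from either article):

```python
# An expansion front starts d_ly light-years away and heads toward us at a
# fraction beta of lightspeed. Light from its launch reaches us after d_ly
# years; the front itself arrives after d_ly / beta years, so the gap
# between "we see it" and "it arrives" is d_ly * (1/beta - 1) years.

def warning_time_years(d_ly: float, beta: float) -> float:
    """Years between first possible sighting of the front and its arrival."""
    return d_ly / beta - d_ly

for beta in (0.5, 0.8, 0.99):
    print(f"beta = {beta:.2f}: {warning_time_years(1000.0, beta):,.0f} years")
# beta = 0.50: 1,000 years
# beta = 0.80: 250 years
# beta = 0.99: 10 years
```

Only in the limit where beta is extremely close to 1 does the warning window shrink toward nothing.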
I said it has a lot of implications—I didn’t say what they were! The implications are that it’s easy to cross galactic distances, so this makes the Fermi paradox much worse (or may imply AI is harder to reach).
Sentence rephrased to:
It seems to be surprisingly easy (which has a lot of implications for the Fermi Paradox), given sufficient automation or AI.
Ah, that makes much more sense. Thanks for clarifying.
I suggest reordering that as “It seems to be surprisingly easy, given sufficient automation or AI (which has a lot of implications for the Fermi Paradox)”, which makes your point a bit clearer.
No, that’s how I had it initially, and that’s what caused confusion! The ease of expansion is relevant to Fermi P., not the AI or automation.
Oh, right, sorry, I misread Joshua’s comment. I thought he didn’t notice that the ease of expansion is relevant given AI.
Unless it doesn’t want you to see it coming, or has swept past Earth already and left a false-sky planetarium in its wake.
Which likely requires using substantially slower speeds, and also requires that every single AI coming in our direction has made that same decision.
This seems extremely unlikely. It first requires an AI to care enough to deceive us at a pretty high energy cost, and it requires this AI to use an extremely complex deception. The most obvious deception (and the most likely form, if one had occurred at any time other than very recently) would be to simply make the sky look empty of stars. Not only that, but this apparent false sky has unnecessary details which would be extremely hard to fake, such as neutrino bursts from supernovae (as with SN 1987A, where the neutrinos arrived hours before the visible brightening). Note also that if there is such a false-sky planetarium, then all the data we are using to discuss the Great Filter becomes completely suspect anyhow (because the AI could have deliberately made cosmology look very different than it actually is), so this should essentially fall into the same category as any highly deceptive, nearly omnipotent being.
(Penrose process, black holes as radiators.)
Can you explain the relevance? I’m not seeing it.
Based on the last link, I think he means that advanced civilizations will (almost always, almost completely) live very near black holes. It’s very unlikely we would notice that with current technology, if they make an effort not to be very obvious.
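For what it’s worth, the standard figure behind the relevance (my gloss, not from the linked article): the Penrose process mines a rotating black hole’s spin energy, and for an extremal Kerr black hole the extractable fraction of the mass-energy is

$$E_{\max} = \left(1 - \tfrac{1}{\sqrt{2}}\right) M c^{2} \approx 0.29\, M c^{2},$$

versus roughly $0.007\,Mc^{2}$ for hydrogen fusion; a black hole is also about the coldest heat sink available, hence “black holes as radiators”. So a civilization optimizing for energy efficiency would plausibly end up living next to one while emitting very little we could see.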
We would not expect to see it coming in my model—the probes would be very small, the deceleration signature virtually undetectable.
Sure, but if they start doing anything on a large scale in another solar system, that should be noticeable. We don’t see any evidence that any star’s energy is being harvested.
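To put a number on “noticeable”: waste heat is hard to hide, and a shell capturing a Sun-like star’s full output would glow in the mid-infrared. A minimal sketch, assuming a hypothetical shell at 1 AU that re-radiates only from its outer surface:

```python
import math

# Equilibrium temperature of a hypothetical Dyson shell that absorbs a
# Sun-like star's full luminosity L at radius R and re-radiates it as a
# blackbody from its outer surface: L = 4 * pi * R^2 * sigma * T^4.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def shell_temperature(luminosity_w: float, radius_m: float) -> float:
    """Blackbody temperature needed to radiate away the star's output."""
    return (luminosity_w / (4 * math.pi * radius_m**2 * SIGMA)) ** 0.25

t = shell_temperature(L_SUN, AU)
print(f"{t:.0f} K, peaking near {2898 / t:.0f} micron")  # ~394 K, ~7 micron
```

A star-sized infrared source with no optical counterpart is exactly the sort of signature infrared surveys have looked for, and thermodynamics doesn’t give the harvester a way to avoid producing it.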