Another possible way for AI X-risk to be linked to the Fermi Paradox is if ecosystems of superintelligent AIs tend to handle their own existential risks badly, damaging the fabric of reality badly enough to destroy themselves and everything in their neighborhoods in the process.
For example, if one wants to discover FTL, one probably needs to develop radically new physics and to perform radically novel physical experiments, and it might be the case that our reality is “locally fragile” to this kind of experiment, so that such an experiment would often bring about an “end of the world in the local neighborhood”.
Or consider a whole class of equivalent scenarios. It is possible the universe is cheating somehow, modeling large complex objects like stars not as individual subatomic processes but as an entangled aggregate whose behavior it calculates in bulk. The observable outcome would be the same.
A Singularity of any form fills the space around its star with the most complex, densest technology that can be devised, and that cannot be modeled in any way except by calculating every single interaction.
In a game, this would fail, and the error handling would clear the excess entities or delete the bad region of the map.
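To make that concrete, here is a toy sketch of a “lazy” engine with a bulk-modeling fallback and a hard complexity budget. All names and thresholds are invented for the example; this is not a claim about how an actual simulation would be built, only an illustration of the failure mode described above:

```python
# Toy sketch of the "lazy universe" idea above -- purely illustrative,
# with invented names and thresholds.

BULK_THRESHOLD = 10**6   # interactions/tick above which the engine wants a bulk model
HARD_BUDGET = 10**9      # interactions/tick the engine cannot afford at all

class Region:
    def __init__(self, name, interactions_per_tick, bulk_modelable):
        self.name = name
        self.interactions = interactions_per_tick
        # A star is regular enough to be computed "in bulk"; a maximally
        # dense post-Singularity technosphere, by assumption, is not.
        self.bulk_modelable = bulk_modelable

def step(regions):
    survivors = []
    for region in regions:
        if region.interactions <= BULK_THRESHOLD:
            survivors.append(region)   # cheap: full-detail simulation
        elif region.bulk_modelable:
            survivors.append(region)   # cheaper: one aggregate update
        elif region.interactions <= HARD_BUDGET:
            survivors.append(region)   # expensive, but still affordable
        # else: "error handling" -- the over-budget region is simply
        # deleted, i.e. the local end of the world
    return survivors

regions = [
    Region("ordinary star", 10**12, bulk_modelable=True),
    Region("Dyson-swarm technosphere", 10**12, bulk_modelable=False),
]
print([r.name for r in step(regions)])   # ['ordinary star']
```

The star and the technosphere have the same raw interaction count; only the technosphere resists bulk approximation, so only the technosphere gets culled.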
Yes, if one is in a simulation, the Fermi Paradox is easy, and there are likely to be some fuses against excessive computational demands, one way or another. (Although this is mostly a memory problem; otherwise it is also solvable by the simulation’s inner time being slowed relative to external time. External observers would then see the simulation’s progress slow down, and if the counterfactual of being able to “look outside, to view some of the enveloping simulation” were realized, that outside world would appear to speed up.)
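For concreteness, that slowdown is just a ratio of compute budget to compute demand; a minimal sketch with made-up numbers:

```python
# Minimal arithmetic for the slowdown workaround -- the numbers are
# made up; only the ratio matters.

host_budget_flops = 1e20       # hypothetical compute the host spends per external second
cost_per_inner_second = 1e23   # hypothetical cost of simulating one inner second

# Inner seconds that advance per external second:
slowdown = host_budget_flops / cost_per_inner_second
print(slowdown)                # 0.001: one inner second takes ~1000 external seconds

# Inhabitants notice nothing; only the hypothetical "look outside" would
# reveal the enveloping world running faster by 1/slowdown (~1000x here).
```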
I thought about it and realized that it is still unsatisfactory. Imagine that solar systems do get reset, but sometimes only after a starship has departed. The beings on the departing ship would notice that something had happened, would eventually discover the cause through experiments, and would then proceed to colonize the universe while avoiding overcrowding any one system.
This “at least one successful replicator” consideration weakens most proposed solutions to the paradox.
ASI is a great replicator, so this scenario fails to really explain anything. Sure, maybe on Earth there might be a nuclear war to try to stop the ASI, and maybe in some timelines the ASI is defeated and humans die too. But this has to be the outcome everywhere in the universe, or again we should see a sky crowded with Dyson swarms.
I’ll note that most of the theorised catastrophes in that vein look like either “planet gets ice-nined”, “local star goes nova”, or “blast wave propagates at lightspeed forever”. The first two of those are relatively easy for an intelligent singleton to work around, and the last doesn’t explain the Fermi observation, since any instance of it in our past lightcone would have destroyed Earth.
My mental model of this class of disasters is different and assumes a much higher potential for discovery of completely novel physics.
I tend to assume that, measured by the ratio of today’s physics knowledge to the physics knowledge of 500 years ago, there is still potential for a comparable jump.
So I tend to think in terms of either warfare with weapons involving short-term reversible changes to fundamental physical constants and/or the Planck-scale structure of space-time, or careless experiments of the same kind, with both cases resulting in the total destruction of the local neighborhood.
In this sense, a singleton does indeed have better chances than multipolar scenarios, both because it has much less potential for “warfare” and because it has a much, much easier time coordinating the risks of “civilian activities”.
However, I am not sure the notion of a singleton is well-defined; a system can look like a singleton from the outside and behave like one most of the time, but it still needs plenty of non-trivial structure inside and is still likely to be a “Society of Mind” (just as most humans look like singular entities from the outside, but have plenty of non-trivial structure inside themselves and are “Societies of Mind”).
To compare, even the most totalitarian states (our imperfect approximations of singletons) have plenty of factional warfare, and powerful factions destroy each other all the time. So far those factions have not used military weapons of mass destruction in these struggles, but that is mostly because such weapons have been relatively unwieldy.
And even without those considerations, experiments in search of new physics are tempting, and balancing the risks and rewards of such experiments can easily go wrong even for a “true singleton”.