I believe that a possible solution to the Fermi paradox (I don’t place much weight on this belief; besides, it’s a fairly useless thing to think about) is that physics has unlimited local depth. That is, each sufficiently intelligent AI, under most of the likely goal systems arising from its development, finds it more desirable to spend time configuring the tiny details of its local physical region (or the details of reality that have almost no impact on the non-local physical region) than to go to other regions of the universe and do something with the rest of the matter. This also requires a way for it to protect itself without needing to implement preventive offensive measures, so there should be no way to seriously hurt a computation once it has dug itself sufficiently deep into the physics.
Any reason AIs with goal systems referring to the larger universe would be unlikely?
Something akin to the functionalist position: if you accept living within a simulated world, you may also accept living within a simulated world hosted on a computation running in the depths of local physics, if that’s a more efficient option than going outside; extend that to goal systems in general. Of course, some minds may really care about the world on the surface, but they may be overwhelmingly unlikely to result from the processes that lead to the construction of AIs converging on a stable goal structure. It’s a weak argument (as I said, the whole point is weak), but it nonetheless looks like a possibility.
P.S. I realize we are going strongly against the ban on AGI and Singularity topics, but I hope this being a “crazy thread” somewhat mitigates the problem.
In Stross’s novel “Accelerando”, even without the locally deeper physics, the AIs formed Matrioshka Brains and more or less ignored the rest of the universe because of communication difficulties—mainly reduced bandwidth but also time lags.