Something akin to the functionalist position: if you accept living within a simulated world, you may also accept living within a simulated world hosted on computation running in the depths of local physics, provided that is more efficient than going outside; then extend that reasoning to a general goal system. Of course, some minds may genuinely care about the world on the surface, but such minds may be overwhelmingly unlikely to result from the processes by which AIs converge on a stable goal structure. It's a weak argument (as I said, the whole point is weak), but it nonetheless looks like a possibility.
P.S. I realize we are going strongly against the ban on AGI and Singularity topics, but I hope this being a "crazy thread" somewhat mitigates the problem.