For many reasons I think it’s better to see a superintelligence as modeling the world (including the people in it) on a level different from intentionality, using concepts unnatural to a human. If you want to understand its impact, the world with a superintelligence in it doesn’t contain any humans, or any intelligent agents at all, not even the singleton itself, in the model the singleton runs in its moments of decision. Only the singleton makes decisions, and with respect to those decisions everything else is the stuff of its mind, the material that gets optimized according to a humane utility function. The utility function is ultimately over the stuff of reality, not over transhuman people or any kind of sentient beings. This underlies the perspective on the singleton as a new, humane physics of the world.
The way we interpret the world under a singleton, and the singleton’s actions on it, differs from the way the singleton itself interprets the world and makes decisions about it, even if a simplified model agrees with reality nine times out of ten. What the singleton builds can be interpreted back, from our perspective, as sentient beings; and those sentient beings, which we read off the optimized stuff of reality, could in turn be seen as interpreting what’s going on as multiple sentient beings going around a new world, learning, communicating, living their lives. They can even (be interpreted to) interpret the singleton’s actions as certain adjustments to physics, to people’s minds, to objects in the world, but that is not the level on which the singleton’s decisions are made. It is the level on which they make their own decisions. Their decisions are determined by their cognitive algorithms, but the outcomes of those decisions are taken into account in arranging the conditions that allow the decisions to be made, even to be thought about, down to the options for thoughts of one agent that lead to thoughts of another agent after an object-level interaction that leads to the outcome in question. It’s a perpetual, worldwide Newcomb’s paradox in action, with the singleton arranging everything it can to get this right, including keeping a balance with unwanted interference, and with unwanted awareness of interference, which is interference in its own right, and so on. You are the stuff of physics, and you determine what comes of your actions, but this time the physics is not at all simple, in very delicate ways, and you consist of this superintelligent physics as well. I think this perspective makes it possible to see how the guiding process can be much more subtle than prohibiting things that fall into natural human or transhuman categories.
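The Newcomb-like dynamic described above can be sketched with a toy model (everything here, the payoffs and the agent names, is an illustrative assumption, not something from the text): a predictor runs the agent’s own decision procedure in advance and arranges the environment accordingly, so the conditions the agent faces already reflect the choice it is about to make.

```python
# Toy Newcomb's paradox: the predictor simulates the agent's decision
# procedure before the game, and fills the opaque box based on that
# prediction. The "physics" the agent then acts in was arranged around
# the predicted outcome of its own algorithm.

def one_boxer(visible_amount):
    # Decision procedure that takes only the opaque box.
    return "one"

def two_boxer(visible_amount):
    # Decision procedure that takes both boxes.
    return "two"

def play(agent):
    # Predictor runs the agent's algorithm ahead of time...
    predicted = agent(visible_amount=1000)
    # ...and arranges the world before the agent actually chooses.
    opaque = 1_000_000 if predicted == "one" else 0
    # The agent now decides, inside conditions shaped by the
    # prediction of that very decision.
    choice = agent(visible_amount=1000)
    return opaque if choice == "one" else opaque + 1000

print(play(one_boxer))  # 1000000
print(play(two_boxer))  # 1000
```

The point of the sketch is only that the agent’s algorithm fully determines its choice, while the payoff structure it encounters was already optimized around that determination, which is the relationship the essay ascribes to people living inside a singleton.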
Of course, these human interpretations would apply to the optimized future only if the singleton is tuned so perfectly as to produce something they can describe, and maybe not even then, because a creative surprise could reveal a better, unexpected way.