I’ve had similar ideas but my conception of such a utopia would differ slightly in that:
This early on (at least given how long the OC has been subjectively experiencing things), I wouldn’t expect people to want to spend most of their time in simulations with their memories stripped. If anything, I’d expect a perfectly accurate simulation to initially be easier to enjoy if you could relax knowing it wasn’t actually real (plus people will want simulations where they can kill simulated villains guilt-free).
I personally could never be comfortable being totally at the mercy of the machinations of superintelligences and the protection of the singleton AGI. So I would ask the singleton AI to make me a lesser superintelligence specifically to look out for my values and interests, which it should have no problem doing if it’s actually aligned. Similarly, I’d expect such an aligned singleton to allow the creation of “guardian angel” AGIs for countless other people, provided those AIs have stable values compatible with its own aligned values.
I would expect most simulations to entail people’s guardian angel AI simply acting out the roles of all the NPCs with perfect verisimilitude, while obviously never suffering when it acts out pain and the like. I’d also expect that many NPCs one formed positive relationships with would at some point be seamlessly swapped with newly created minds, provided the singleton AI considered their creation to be positive utility and they wouldn’t have issues with how they were created. I expect this to be a major source of new minds, such that the distant future will contain many thousands of minds created as approximations of fictional characters: think of all the people living out their fantasies at, say, Hogwarts, and then taking a bunch of its characters with them when they leave.
PS: If I were working on a story like this (I’ve actually seriously considered it, and I get the sense we read and watch a lot of the same stuff, like Isaac Arthur), I’d mention how many (most?) people don’t like reverting their level of intelligence, for reasons similar to why people today would find the idea of being reverted to a young child’s intelligence level existentially terrifying.
This is important because it means one should view adult human-level intelligence as a sort of “childhood” for +X% human-level superintelligence. So to maximize the amount of novel fun you can experience (without forgetting things and repeating the same experiences like a loop immortal), you should wait until you get bored of all there is to appreciate at your current intelligence level (within the range of variance in mind design you’re comfortable with) before improving it slightly. This also means that, unless you are willing to become a loop immortal, the speed you run your mind at will determine, perhaps to within an order of magnitude, how quickly you progress through the process of “maturing” into a superintelligence, unless you’re deliberately “growing up” faster than is generally advised.
Yeah, this makes sense. However, I can honestly see myself reverting my intelligence a bit at different junctures, the same way I like to replay video games at greater difficulty. The main reason I am scared of reverting my intelligence now is that I have no guarantee of security that something awful won’t happen to me. With my current ability, I can be pretty confident that no one is going to really take advantage of me. If I were a child again, with no protection or less intelligence, I can easily imagine coming to harm because of my naivete.
I also think a singleton AI is inevitable (and desirable), simply because it is stable: there’s no conflict between superintelligences. I do agree with the idea of a Guardian Angel type AI, but I think it would still be an offshoot of that greater singleton entity. I think most people would eventually forget about the singleton AI and just perceive it as part of the universe, the same way gravity is part of the universe. Guardian Angels could be a useful construct, but I don’t see why they wouldn’t be part of the central system.
Finally, I do think you’re right about people not wanting to erase their memories when entering a simulation. I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.
I appreciate the comment. You’ve made me think a lot. The key idea behind this utopia is choice: you can basically go anywhere, do anything. Everyone will have different levels of comfort with the idea of altering their identity, experience, or impact. If you wanted to live exactly in the year 2023 again, there would be a physical, Earth-like planet where you could do that! I think this sets a good baseline so that no one is unhappy.
I think a guardian angel AI only really makes sense if it isn’t an offshoot of the central AGI. After all, if you distrust the singleton enough to want a guardian angel AI, then you will want that guardian to be as independent from the singleton as is allowed. Whereas if you do trust the singleton AI (because, say, you grew up after the singularity), then I don’t really see the point of a guardian angel AI.
>I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.
I also disagree with this, insofar as I don’t think that people “deciding on some place to stay” is a stable state of affairs under an aligned superintelligence, since I don’t think people will want to become loop immortals if they know that’s where they’re heading. Similarly, I don’t know if I would even consider an AGI aligned if it didn’t try to ensure people understood the danger of becoming a loop immortal and nudge them away from it.
Though I really want to see some surveys of normal people to confirm my suspicions that most people find the idea of being an infinitely repeating loop immortal existentially horrifying.