Yeah, this makes sense. However, I can honestly see myself reverting my intelligence a bit at different junctures, the same way I like to replay video games at a higher difficulty. The main reason I am scared of reverting my intelligence now is that I have no guarantee that something awful won't happen to me. With my current abilities, I can be pretty confident that no one is going to take advantage of me. If I were a child again, with no protection and less intelligence, I can easily imagine coming to harm because of my naivete.
I also think singleton AI is inevitable (and desirable). This is simply because it is stable. There’s no conflict between superintelligences. I do agree with the idea of a Guardian Angel type AI, but I think it would still be an offshoot of that greater singleton entity. For the most part, I think most people would forget about the singleton AI and just perceive it as part of the universe the same way gravity is part of the universe. Guardian Angels could be a useful construct, but I don’t see why they wouldn’t be part of the central system.
Finally, I do think you’re right about not wanting to erase memories for entering a simulation. I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.
I appreciate the comment. You’ve made me think a lot. The key idea behind this utopia is the idea of choice. You can basically go anywhere, do anything. Everyone will have different levels of comfort with the idea of altering their identity, experience, or impact. If you’d want to live exactly in the year 2023 again, there would be a physical, earth-like planet where you could do that! I think this sets a good baseline so that no one is unhappy.
I think a guardian angel AI only really makes sense if it isn’t an offshoot of the central AGI. After all, if you distrusted the singleton enough to want a guardian angel AI, then you would want it to be as independent from the singleton as is allowed. Whereas if you do trust the singleton AI (because, say, you grew up after the singularity), then I don’t really see the point of a guardian angel AI.
>I think there would be levels, and most people would want to stay at a pretty normal level and would move to more extreme levels slowly before deciding on some place to stay.
I also disagree with this, insofar as I don’t think that people “deciding on some place to stay” is a stable state of affairs under an aligned superintelligence, since I don’t think people will want to be loop immortals if they know that is where they are heading. Similarly, I don’t know if I would even consider an AGI aligned if it didn’t try to ensure people understood the danger of becoming a loop immortal and try to nudge them away from it.
Though I really want to see some surveys of normal people to confirm my suspicion that most people find the idea of being an infinitely repeating loop immortal existentially horrifying.