You flick the switch, and find out that you are a component of the AI, now doomed to an unhappy eternity of answering stupid questions from the rest of the AI.
This is a problem, but if it's the only problem, it is still significantly better than a paperclip universe.
I’m sure the designer would approve of being modified to enjoy answering stupid questions. The designer might also approve of being cloned for the purpose of answering one question, and then being destroyed.
Unfortunately, it turns out that you’re Stalin. Sounds like 1-person CEV.
That is, or at least requires, a pretty fundamental change. How can you be sure it's value-preserving?
I had assumed that a new copy of the designer would be spawned for each decision, and shut down afterwards.
Although thinking about it, that might just doom you to a subjective eternity of listening to the AI explain what it’s done so far, in the anticipation that it’s going to ask you a question at some point.
You’d need a good theory of ems, consciousness and subjective probability to have any idea what you’d subjectively experience.