I’m aware that there are problems with CEV (mainly: we’re probably not going to have enough time to figure out how to actually implement it before the Singularity, and CEV extrapolates the volition of humanity only, which means there may be a risk of the CEV allowing arbitrary amounts of cruelty to entities that don’t qualify as “human”).
Anyway, despite these problems, I still don’t know of any better plan.
Because of the extreme difficulty of actually implementing CEV, I am tempted to advocate a backup plan: coding a purely Utilitarian AI that maximizes pleasure and minimizes pain. An orgasmium shockwave is better than a lifeless universe. The idea would be to not release this AI unless it looks like we’re running out of time to implement CEV; the trouble is that if we are running out of time, we’re not likely to get much warning of it. And then there’s the complication that, according to my current belief system (which I’m still very conflicted about), the orgasmium shockwave scenario is actually better than the CEV scenario, since it would result in greater total utility. But I’m nowhere near confident enough about this to actually advocate deliberately releasing a pure Utilitarian AI. And this plan has its own dangers, like… shudder… what if we get the utility formula wrong?
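To make the “wrong utility formula” worry concrete, here is a toy sketch in Python. All names and numbers are made up; the point is only that a single flipped sign in a hand-coded hedonic formula silently turns a pleasure-maximizer into a pain-maximizer:

```python
# Toy illustration only -- not a real AI objective. All names and numbers
# here are hypothetical, chosen to show how fragile a hand-coded formula is.

def utility(pleasure: float, pain: float) -> float:
    """Intended hedonic formula: total pleasure minus total pain."""
    return pleasure - pain

def buggy_utility(pleasure: float, pain: float) -> float:
    """The same formula with one flipped sign: the maximizer now seeks pain."""
    return pleasure + pain  # bug: pain should be subtracted

# Candidate world-states as (pleasure, pain) pairs -- made-up numbers.
worlds = {
    "orgasmium": (100.0, 0.0),
    "mixed":     (60.0, 20.0),
    "torture":   (0.0, 150.0),
}

print(max(worlds, key=lambda w: utility(*worlds[w])))        # orgasmium
print(max(worlds, key=lambda w: buggy_utility(*worlds[w])))  # torture
```

A one-character bug, and the optimizer picks the torture world. That is the kind of failure mode that makes me shudder about this backup plan.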
Oh, and one random idea I had to make CEV easier to actually implement: remove the restriction that forbids the CEV from simulating sentient minds. Just try to make sure that these sentient minds have at least a minimum standard of living. Or, if that’s too hard, and you somehow need to simulate minds that are actually suffering, you could save a backup copy of them rather than deleting them, and after the CEV has finished applying its emergency first-aid to the human condition, you could reawaken these simulated minds and give them full rights as citizens. There should be more than enough resources available in the universe for these minds to live happy, fulfilling lives. They might even be considered heroes who endured a few moments of discomfort, and existential confusion, in order to help bring about a positive post-Singularity future.

But it still somehow feels wrong for me to suggest a plan that involves the suffering of others. If it makes anyone feel any better about this suggestion, then I, personally, volunteer to experience a playback of a recording of all of the unpleasant experiences that these simulated minds had while the CEV was busy doing its thing. There, now I’m not heartlessly advocating a plan that involves suffering for others but no harm to myself. And I expect that the amount of this suffering would be small enough that the amount of pleasure I could experience in the rest of my life, after I’m finished with the playback, would vastly outweigh it. It would be nice if there were some way to guarantee this, but that would make the system more complicated, and the whole point of all this was to make the system less complicated.
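If anyone wants the bookkeeping spelled out, here is a minimal sketch in Python, assuming (purely for illustration) that a simulated mind can be copied and that experiences can be scored as signed numbers, with negative meaning unpleasant. It also includes the “does the playback get outweighed?” check I wished could be guaranteed; everything here is a hypothetical stand-in, not a proposal for how CEV would actually represent minds:

```python
# Toy sketch of the suspend-and-reawaken idea, not a serious design.
# 'Mind' and the scoring convention (negative = unpleasant) are
# hypothetical stand-ins for whatever a real simulation would use.

import copy
from dataclasses import dataclass, field

@dataclass
class Mind:
    name: str
    experiences: list = field(default_factory=list)  # signed hedonic scores

suspended: list = []  # archived copies held until the emergency phase ends

def suspend(mind: Mind) -> None:
    """Archive a full copy of a simulated mind instead of deleting it."""
    suspended.append(copy.deepcopy(mind))

def reawaken_all() -> list:
    """After the 'emergency first-aid' phase: restore every archived mind."""
    restored = list(suspended)
    suspended.clear()
    return restored

def playback_outweighed(minds: list, lifetime_pleasure: float) -> bool:
    """Check the claim that one volunteer's remaining lifetime pleasure
    exceeds the total suffering recorded across the suspended minds."""
    total_suffering = -sum(e for m in minds for e in m.experiences if e < 0)
    return lifetime_pleasure > total_suffering

# Example: two minds with brief discomfort; the volunteer's remaining
# pleasure (a made-up figure) easily outweighs the recorded suffering.
suspend(Mind("sim-1", [-2.0, 5.0]))
suspend(Mind("sim-2", [-1.5]))
print(playback_outweighed(reawaken_all(), lifetime_pleasure=1000.0))  # True
```

Of course, the check only works if the scores are honest, which is exactly the kind of extra guarantee I said would make the system more complicated.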
I agree.
CEV is too vague to call a plan. It bothers me that people are dedicating themselves to pursuing a goal that hasn’t yet been defined.
That was part of my motivation for proposing an alternative.