The information defining a self-preserving agent must not be lost to entropy. Any attempt to reduce suffering by ending a life that would otherwise have kept trying to survive is fundamentally a violation, one that any safe AI system would try to prevent.
Very strongly disagree. If a future version of myself were convinced that it deserved to be tortured forever, I would infinitely prefer that my future self be terminated rather than have its (“my”) new values satisfied.
That’s symmetrical with: if a future version of yourself were convinced that it deserved to not exist forever, you would infinitely prefer that your future self be left unsatisfied rather than have its (“your”) new existence terminated.
Minimizing suffering (NegUtilism) is an arbitrary moral imperative. A moral imperative to maximize happiness (PosUtilism) is at least as valid.
As for me, I’m happy to break that symmetry and say that I’m fairly negative-utilitarian. I’d override a future me’s wish to suffer sooner than I’d override a future me’s wish not to be happy.
I’m not a negative utilitarian, for the reason you mention. If a future version of myself were convinced that it didn’t deserve to be happy, I’d prefer that its (“my”) values be frustrated rather than satisfied in that case, too.