I’m myself a bit suspicious about whether the argument for strong self-improvement is as compelling as it sounds, though. Something you have to take into account is whether it is possible to predict that a transcendence leaves your goals intact, e.g. whether you can be sure you’ll still care about bananas after you go from chimphood to personhood.
Isn’t that exactly the argument against non-proven AI values in the first place?
If you expect AI-chimp to be worried that AI-superchimp won’t love bananas, then you should be very worried about AI-chimp.
I don’t get what you’re saying about the paperclipper.
It is a reason not to transcend if you are not sure that you’ll still be you afterwards, i.e. that you’ll keep your goals and values. I just wanted to point out that the argument runs in both directions: it is an argument for the fragility of values, and therefore the dangers of fooming, but also an argument for the difficulty that could be associated with radically transforming yourself.