If you don’t, you’re really going to regret it in a million years.
I’m rather skeptical about that, even conditioning on Ezekiel being around to care. I expect that the difference between his current preferences and his current preferences plus a greater concern for his future preferences won’t significantly change the outcome the future Ezekiel experiences.
The chance of human augmentation reaching that level within my lifespan (or even within my someone’s-looking-after-my-frozen-brain-span) is, by my estimate, vanishingly low. But if you’re so sure, could I borrow money from you and pay you back some ludicrously high amount in a million years’ time?
More seriously: Seeing as my current brain finds regret unpleasant, that’s something that reduces to my current terminal values anyway. I do consider transhuman-me close enough to current-me that I want it to be happy. But where their terminal values actually differ, I’m not so sure—even if I knew I were going to undergo augmentation.