I don’t need gains to be “eternal” or “non-fragile” to count. Remember our debate about the “Hansonian hell world”, where I pointed out that a few centuries of human minds living in plenty, followed by aeons of alien minds, isn’t really a “hell world” to me?
My conclusions about the world simply mean that the minds I care about get centuries or decades of arranging matter instead of billions of years. I don’t find that shortening a convincing reason to embrace counterfeit utility through wire-heading.
So you’ve no reinforcing reasons to give up your own fleeting well-being for anyone else’s sake, especially that of far people? Well, at least that’s honest. But I still want to aim for more than temporary gratification when deciding what to do—partly because I still feel very irresponsible and worried when trying not to care about what I see in society.
The set of minds I care about obviously includes my own but isn’t limited to it!
Sure, of course you care. I meant... well, I’ll think of how to explain it. But basically I’m talking about quasi-religious values again. I have this nagging feeling that we’re hugely missing out on both satisfaction and morality by not understanding their true use.
Ok I hope you can write it out because it sounds interesting. But let me pose a query of my own.
You seem to think that over a span of a few billion years real utility optimization should win out over wire-heading, but that if we’re limited to a few centuries, counterfeit utility is better. How would you feel about 10 000 years of the values you cherish followed by an alien or empty universe? What about 10 million years?
If it makes you feel better: if everything I cared about were destined to disappear tomorrow, I think I would go for some wire-heading, so I guess we’re just at different spots on the same curve. Is that so?
I’m thinking. But keep in mind that I’m basically a would-be deontologist; I’m just not sure what my deontic ethics should be. If only I could get a consistent (?) and satisfying system, I’d be fine with missing out on direct utility. I know, for humans that’s as impossible as becoming a utility maximizer, because we’re the antithesis of “consistency”.