Why? How would that misery help you or anyone else in any way? If I caught the plague and died I would not want my loved ones to be miserable.
I would prefer to be the sort of person who would mourn her family, in a way vaguely similar to the way that I prefer to be the sort of person who one-boxes on Newcomb’s problem. I would actually carry out this disposition in either case as part of being that kind of person.
Interesting perspective, though I don’t agree with it.
I would ideally prefer to be the sort of person who will do everything in his power to prevent bad things from happening, but who, if they happen anyway, will not further decrease utility by feeling miserable (apart from being a terminal disutility, grief tends to result in additional bad things happening both to oneself and to the surviving people one cares about).
The Newcomb analogy basically asks, “what if you can’t have both?” In that case, granted, I would rather have the former. And that is the trade-off evolution found: it’s as if it can’t trust us to protect the people we care about without holding over our heads a credible threat that harm to them will be reflected in harm to ourselves.
But unlike Newcomb, I don’t think the trade-off here is logically necessary. Just because evolution wasn’t able to create minds that have it both ways does not, it seems to me, preclude the possibility of such minds coming into existence in the future by other means.
(Whether and to what extent we could by means available today modify our existing minds to have it both ways is, granted, another question, to which I don’t currently have a definite answer.)
Cognitive behavioral therapy tries to do this (and Alicorn’s luminosity techniques are similar in some ways).
My answer is that in that situation, it’s not about you. For many people, being miserable in response to a situation that ordinarily triggers feelings of intense sadness is a human value; to give up being sad when very sad things happen would be to lose a part of what makes me me. I can understand that you would want me to be happy anyway, but it’s more than a little intrusive of you to insist on dictating my emotional state in that way.
I don’t think he was trying to dictate anyone’s emotional state.
Fair point. I should say, rather, “make sure not to expect your loved ones to ‘help you or anyone else’ with their emotions”; sometimes emotions are legitimately self-serving.
As long as they know I wouldn’t want them to be sad.
I don’t really see how that form of grief is self-serving; if you need to grieve to get over my death, fine, but isn’t it more self-serving to be happy?