Interesting perspective, though I don’t agree with it.
I would ideally prefer to be the sort of person who will do everything in his power to prevent bad things from happening, but who, if they happen anyway, will not further decrease utility by feeling miserable (apart from being a terminal disutility in itself, grief tends to cause additional bad things to happen, both to oneself and to the surviving people one cares about).
The Newcomb analogy basically asks: what if you can’t have both? In that case, granted, I would rather have the former. And that is the trade-off evolution found: it’s as if evolution couldn’t trust us to protect the people we care about without holding over our heads a credible threat that harm to them will be reflected in harm to ourselves.
But unlike Newcomb, I don’t think the trade-off here is logically necessary. The fact that evolution wasn’t able to create minds that have it both ways does not, it seems to me, preclude the possibility of such minds coming into existence in the future by other means.
(Whether and to what extent we could by means available today modify our existing minds to have it both ways is, granted, another question, to which I don’t currently have a definite answer.)
Cognitive behavioral therapy tries to do this (and Alicorn’s luminosity techniques are similar in some ways).