For me, becoming able to like a new thing seems like a much more positive change than stopping liking an old thing. The latter—even if it would be beneficial overall—feels like an impairment, a harm.
If others feel the same way—I don’t know whether they do—then they would be less inclined to offer advice on how to impair yourself than on how to enlarge your range of pleasures. And if others are expected to feel the same way, advice-givers might refrain from offering advice that would be perceived as “how to impair yourself”.
(A perfectly rational agent would scarcely ever want to lose the ability to like something, since that would always lower their utility. The exceptions would be game-theory-ish ones, where being known not to like something would reassure others that you won't try to seize it. Of course, we are very far from being perfectly rational agents, and for many of us it might well be beneficial overall to lose the ability to enjoy clickbait articles, sugary desserts, or riding a motorcycle at 100mph.)
> A perfectly rational agent would scarcely ever want to lose the ability to like something, since that would always lower their utility.
What is a perfectly rational self-modifying agent? I don't think anyone has an answer to that, although it is surely something that MIRI studies. The same argument that proves it is never rational to cease liking something also proves it must always be rational to acquire a liking for anything. You end up with wireheading.
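The static version of the quoted claim can be sketched in a toy model (my illustration, not from either commenter): if utility is just the enjoyment of the best available option the agent still likes, then shrinking the set of things it likes can never raise its attainable utility, while enlarging it can never lower it. The model deliberately omits the behavioral feedback (you eat the dessert anyway) that makes losing a liking beneficial for real, imperfectly rational humans.

```python
# Toy model: utility = enjoyment of the best available option the agent likes.
# All names and numbers here are made up for illustration.

def best_utility(likes, enjoyment, available):
    """Return the enjoyment of the best available option in `likes` (0 if none)."""
    options = [enjoyment[o] for o in available if o in likes]
    return max(options, default=0.0)

enjoyment = {"reading": 5.0, "dessert": 3.0, "clickbait": 1.0}
available = ["dessert", "clickbait"]

full = best_utility({"reading", "dessert", "clickbait"}, enjoyment, available)
pruned = best_utility({"reading", "clickbait"}, enjoyment, available)  # liking for dessert removed

# Removing a liking can only shrink the set of options yielding utility,
# so the attainable maximum can never increase.
assert pruned <= full
```

In this sketch the symmetry the reply points at is visible: by the same reasoning, adding any item to `likes` weakly increases `best_utility`, which, pushed to its limit, is the wireheading conclusion.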
I concur with gjm.
The difference between “I like X” and “I am addicted to X” might be relevant here.