I didn’t either; fortunately, no source has been presented, so I don’t need to believe he said that and can postulate that he actually said the opposite or was engaged in criticizing such a position.
I can confirm he said something like it. However, what he meant by it was that our emotions should be keyed to how we act, not to how the universe is. We should be rewarded for acting to produce the best outcome possible. We don’t control what the universe is, just our actions, so we shouldn’t be made to feel bad (or good) because of something we couldn’t control. For example, if we imagine a situation where 10 people were going to die, but you managed to save 5 of them, your emotional state shouldn’t be sad, because your emotions should reward the fact that you saved 5 people. Equivalently, you shouldn’t really be all that happy that a thousand people get something that makes them really happy when your actions reduced the number of people who received whatever it is by 500. Just because those people are better off doesn’t mean you should be emotionally rewarded, because you reduced the number who would be happy. If the best you can make the universe is horrible, you shouldn’t be depressed about it, because being depressed only adds disutility to the universe and doesn’t incentivize acting to bring the best achievable situation about. Conversely, if the worst you can do is pretty damn good, you shouldn’t be happy about it, because you shouldn’t incentivize leaving utility on the table. Basically, it’s an endorsement of virtue ethics for human-type minds.
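To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the original source; the numbers and function names are made up) of the difference between keying reward to the outcome itself and keying it to how your actions compare against the best outcome you could have produced:

```python
# Illustrative sketch only: "outcome-keyed" vs. "action-keyed" emotional reward.
# The numbers and function names below are invented for this example.

def outcome_reward(actual_utility):
    """Feel good or bad in proportion to how the world actually turned out."""
    return actual_utility

def action_reward(actual_utility, best_achievable_utility):
    """Feel good or bad in proportion to how close your actions came to the
    best outcome you could have produced (a counterfactual-regret measure)."""
    return actual_utility - best_achievable_utility

# Case 1: 10 people were going to die, the best you could do was save 5, and you saved 5.
# Measured in lives (negative = deaths you could not prevent):
print(outcome_reward(actual_utility=-5))                                 # -5: outcome-keyed -> feel bad
print(action_reward(actual_utility=-5, best_achievable_utility=-5))      #  0: action-keyed  -> you did your best

# Case 2: 1,000 people get the goodies, but your actions cut that number by 500.
print(outcome_reward(actual_utility=1000))                               # 1000: outcome-keyed -> feel great
print(action_reward(actual_utility=1000, best_achievable_utility=1500))  # -500: action-keyed  -> utility left on the table
```

The point of the sketch is just that the second function is flat with respect to things you couldn’t have changed and only moves with what your own choices added or subtracted.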
Thanks, that is a deeper understanding than I got from it second-hand (though I did not think it meant wireheading). I understood it as a warning against having, and reacting to, a false sense of control, which I often see; compare “accepting that there are (many) things you cannot change”.
Equivalently, you shouldn’t really be all that happy that a thousand people get something that makes them really happy when your actions reduced the number of people who received whatever it is by 500.
I’ve got no problem with being happy that a thousand people get a bunch of utility (assuming they are people for whom I have altruistic interest). I would not be glad about the fact that I somehow screwed up (or was unlucky) and prevented even more altruistic goodies, but I could be glad (happy) that some action of mine or an external cause resulted in the boon for the 1,000.
I have neither the need nor desire to rewire my emotions such that I could unload a can of Skinner on my ass.