This depends very much on what you mean by “better person”.
In this scenario, I mean “someone who believes that doing nice things for people is a valuable goal, and who strives to act in accordance with this goal”. That said, does it really matter why I do nice things for people, as long as I do them? Outside observers can’t tell what I’m thinking, after all, only what I’m doing.
Do you still do good things without hope of eternal reward?
In my scenario, the answer is either “no”, or “not as effectively”. I would like to do good things, but a powerful case of akrasia prevents me from doing them most of the time. Believing in the eternal reward cancels out the akrasia.
So, if I tell my friend I want to eat at restaurant X—I don’t think I’m inhibiting his free will. I do hope I’m influencing his preferences.
In this case, “free will” is a matter of degree. Sure, you aren’t inhibiting your friend’s choices by force, but you are still affecting them. Left to his own devices, he would’ve chosen restaurant Y—but you caused him to choose restaurant X, instead.
I could con him with a lie that is very liable to make him happy but would result in me getting all of his money and his stuff. What is your moral evaluation of this action?
This action is not entirely analogous, because, while your victim might experience a temporary boost in happiness, he will experience unhappiness once he finds out that his stuff is gone, and that you tricked him. Thus, the total amount of happiness he experiences throughout his life will undergo a net decrease.
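To make the bookkeeping explicit, here is the kind of sum I have in mind, with made-up happiness values (the numbers are pure assumptions; only the sign of the total matters):

```python
# Purely illustrative values, not measurements.
happiness_changes = [
    +10,  # temporary boost from believing the con
    -25,  # later: discovering his money and stuff are gone
    -15,  # later: discovering that a "friend" tricked him
]
print(sum(happiness_changes))  # -30: a net lifetime decrease
```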
The more interesting question is, “what if I could con the person in such a way as to grant him sustained happiness?” I am not sure whether doing so would be moral or not, but I’m also not entirely sure whether such a feat is even possible.
Instead, I think it shows that each of these actions is immoral, but to differing degrees.
Agreed, assuming that the actions are, in fact, immoral.
That said, does it really matter why I do nice things for people, as long as I do them?
From an economics standpoint, it doesn’t matter. From a morality standpoint, I would say it is all that does matter.
Consider: your friend asks you to get him a cup of coffee—with sugar, please! You go make the coffee and put in a healthy amount of the white powder. Unknown to you, it isn’t sugar; it is cyanide. Your friend drinks the coffee and falls down dead. What is your moral culpability here?
In a second instance, someone who thinks of you as a friend asks you for a cup of coffee—with sugar, please! You aren’t actually this person’s friend, though; you hate them. You make the cup of coffee, but instead of putting the sugar in it, you go to the back room, where you usually keep your cyanide powder. You find a bag of white powder and put a large quantity into the coffee. Unknown to you, it isn’t cyanide; it has been switched with sugar. Your enemy drinks the coffee and enjoys it. What is your moral culpability here?
From a strict, bottom-line standpoint, you are a murderer in the first case and totally innocent in the second. And yet that doesn’t feel right. Your intent in the first case was to help a friend; I would say that you bear no moral culpability for his death. In the second case, your intent was to kill a person; I would say you bear the same moral culpability you would bear had you actually succeeded.
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions. As such, if your intent in doing good is to benefit yourself, I think it is fair to say that the act is morally neutral (or at least less moral than it could be). If you intend simply to do good, then I think your actions are morally good, even if the consequences are not.
In my scenario, the answer is either “no”, or “not as effectively”.
I would say this is the light of truth shattering your illusion of being a good person, then. Maybe that realization will drive you to overcome the akrasia, and you can become a good person in fact as well as in your desires.
Left to his own devices, he would’ve chosen restaurant Y—but you caused him to choose restaurant X, instead.
What I hope is happening is that my friend’s preferences include a variable which accounts for the preferences of his friends. That way, when I tell him where I want to go, I am informing his decision-making algorithm without actually changing his preferences. If I wanted to go to X less, then my friend would want to go to X less.
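A minimal sketch of the model I have in mind, with hypothetical names and weights (this is an illustration, not a claim about how my friend’s head actually works):

```python
# Toy model: my friend's utility for a restaurant already includes a
# term for his friends' stated wishes. Telling him what I want changes
# an *input* to his choice, not his preference function itself.

FRIEND_WEIGHT = 0.5  # assumed: how much he weighs his friends' wishes

def his_utility(own_pref, friends_pref):
    """My friend's fixed preference function over one restaurant."""
    return own_pref + FRIEND_WEIGHT * friends_pref

def his_choice(restaurants, my_stated_pref):
    """The restaurant maximizing his utility, given what I said I want."""
    return max(restaurants,
               key=lambda name: his_utility(restaurants[name],
                                            my_stated_pref.get(name, 0.0)))

restaurants = {"X": 0.4, "Y": 0.6}  # his own preferences

print(his_choice(restaurants, {}))          # -> 'Y' (I said nothing)
print(his_choice(restaurants, {"X": 0.9}))  # -> 'X' (I said I want X)
```

The point of the sketch is that the function itself never changes; only its arguments do.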
This action is not entirely analogous, … The more interesting question is...
Agreed. I don’t think this case would be moral, though (it would be a closer fit to the other situation). I think it still qualifies as a usurpation of another person’s free will and is therefore still immoral, even if it makes people happy.
I can try again with another hypothetical. A girl wants to try ecstasy. She approaches a drug dealer and explains she has never tried it but would like to. The drug dealer supplies her with a pill, which she takes. This isn’t ecstasy, though; it is rohypnol. The girl blacks out, and the drug dealer rapes her while she is unconscious, then cleans her up and leaves her on a couch. The girl comes to. Ecstasy wasn’t quite like it was described to her, but she is proud of herself for being adventurous and for trying new things. She isn’t some square who is too afraid to try recreational drugs, and she will believe this about herself, with a good feeling attached to it, for the rest of her life. Has anyone done anything wrong here? The drug dealer was sexually gratified, and the girl feels fulfilled in her experimentation. This feels like a case where every party is made happier, and yet I would still say that the drug dealer has done something immoral, even if he knew for sure how the girl would react.
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions.
From whose point of view? If you are committed to poisoning your hapless friend, then presumably you either don’t care about morality, or you had determined that this action would be sufficiently moral. If, on the other hand, I am attempting to evaluate the morality of your actions, then I can only evaluate the actions you did, in fact, perform (because I can’t read your mind). Thus, if you gave your friend a cup of coffee with sugar in it, and, after he drank it, you refrained from exclaiming, “This cannot be! So much cyanide would kill any normal man!”—then I would conclude that you’re just a nice guy who gives sugared coffee to people.
I do agree with you that intent matters in the opposite case; this is how we can differentiate murder from manslaughter.
I would say this is the light of truth shattering your illusion about being a good person then. Maybe that realization will drive you to overcome the akrasia...
Maybe it won’t, though. Thus, we have traded some harmless delusions of goodness for a markedly reduced expected value of my actions in the future (I might still do good deeds, but the probability of this happening is lower). Did society really win anything?
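In expected-value terms, the trade looks something like this (every number below is an assumption chosen purely for illustration):

```python
# Illustrative only: all values here are assumptions.
value_per_good_deed = 1.0

p_good_with_belief = 0.9     # belief in the reward cancels the akrasia
p_good_without_belief = 0.3  # akrasia mostly wins

print(p_good_with_belief * value_per_good_deed)     # 0.9
print(p_good_without_belief * value_per_good_deed)  # 0.3
# Society trades a harmless delusion for a 3x drop in expected good deeds.
```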
If I wanted to go to X less, then my friend would want to go to X less.
Sounds like this is still mind control, just to a (much) lesser degree. Instead of altering your friend’s preferences directly, you’re exploiting your knowledge of his preference table, but the principle is the same. You could’ve just as easily said, “I know that my friend wants to avoid pain, so if I threaten him with pain unless he goes to X less, then he’d want to go to X less”.
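To make the parallel concrete, here is a toy version of that preference table with an assumed pain-avoidance term; the stated preference and the threat feed the same fixed function and differ only in magnitude:

```python
# Self-contained toy model; all weights are assumptions.
FRIEND_WEIGHT = 0.5  # weight on friends' stated wishes
PAIN_WEIGHT = 5.0    # weight on avoiding threatened pain

def his_utility(own_pref, friends_pref=0.0, threatened_pain=0.0):
    """Fixed preference table; outside inputs only feed it arguments."""
    return own_pref + FRIEND_WEIGHT * friends_pref - PAIN_WEIGHT * threatened_pain

# Stating a preference nudges the sum; a threat shoves it.
print(his_utility(own_pref=0.6, friends_pref=0.9))     #  1.05: persuasion
print(his_utility(own_pref=0.6, threatened_pain=1.0))  # -4.40: coercion
```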
I can try again with another hypothetical. A girl wants to try ecstasy...
I don’t think this scenario is entirely analogous either, though it’s much closer. In this example, there was a very high probability that the girl would sustain severe lasting damage (STDs, pregnancy, bruising, drug overdose or allergic reaction, etc.). Less importantly, the girl received some misleading information about drugs, which may cause her to make harmful decisions in the future. Even if none of these things happened in this specific case, the probability of them happening was high. Thus, we would not want to live in a society where acting as the drug dealer did is considered moral.