then belief in this kind of afterlife basically acts as a powerful anti-akrasia aid, motivating me to achieve this goal
This depends very much on what you mean by “better person”. Returning a lost wallet because you know the pain of losing things, and because you understand the wallet’s owner is a sapient being who will experience similar pain, is the kind of thing a good person would do. Returning a lost wallet because you expect a reward is more of a morally neutral thing to do. So, if you are doing good deeds because you expect a heavenly reward, then you aren’t really being a good person (according to me); you are just performing actions you expect to be rewarded for. I think this belief actually prevents you from being a good person, because as long as you believe in it you can never be sure whether you are acting out of a desire to be good or out of a desire to go to heaven.
In this scenario, would you say that taking this tool away from me would be the right thing to do?
I would. If you use this belief to trick yourself into believing you are a better person (see above), then this is just doubling down, as far as I’m concerned. False beliefs should be destroyed by the truth. I should first destroy the belief in the heavenly reward for good deeds and then let the truth test you. Do you still do good things without hope of eternal reward? If yes, then you are a good person. If not, then you aren’t and you never were.
What do you mean by “free will”?
By “free will” I mean a person’s ability to choose the option they most prefer. So, if I tell my friend I want to eat at restaurant X—I don’t think I’m inhibiting his free will. I do hope I’m influencing his preferences. I assume somewhere in his decision-making algorithm is a routine that considers the strength of his friends’ preferences, and that evaluation is used to modify his preference to eat at restaurant X. I do think I’d be inhibiting his free will if I falsely claimed “Well, we can’t go to Y because it burned down” (or let him continue to believe this without correcting him). I am subverting free will by distorting the apparent available options. I think this also applies if you use a threat of harm (“I’ll shoot you if we don’t go to X”) to remove an option from someone’s consideration.
by conning him), we aren’t just taking away his stuff—we are giving him happiness in return
I know a mentally handicapped person. I think it’s very likely I could trick this person out of their money. I could con him with a lie that is very likely to make him happy but would result in me getting all of his money and his stuff. What is your moral evaluation of this action?
It seems to me that if it is possible to trick Zuckerberg into paying my power bill, then it is possible because he is gullible enough to believe my con. If it is possible for me to trick the mentally disabled, then it is possible because they are gullible enough for me to con. So, I don’t see why there should be any moral difference between tricking the mentally disabled out of their wealth and tricking Zuckerberg out of his. Nigerian email scams should be okay too, right?
I suppose there is some difference here in that Zuckerberg could afford to be conned out of a power bill or two, whereas the average Nigerian scam victim cannot. I interpret this difference as one of scale, though. I think it would be worse to trick the elderly or the mentally disabled out of their life savings than it would be to trick Zuckerberg out of the same number of dollars. This doesn’t mean that it is morally permissible to trick Zuckerberg out of any money, though. Instead, I think it shows that each of these actions is immoral, but of different magnitudes.
This depends very much on what you mean by “better person”.
In this scenario, I mean “someone who believes that doing nice things for people is a valuable goal, and who strives to act in accordance with this goal”. That said, does it really matter why I do nice things for people, as long as I do them? Outside observers can’t tell what I’m thinking, after all, only what I’m doing.
Do you still do good things without hope of eternal reward?
In my scenario, the answer is either “no”, or “not as effectively”. I would like to do good things, but a powerful case of akrasia prevents me from doing them most of the time. Believing in the eternal reward cancels out the akrasia.
So, if I tell my friend I want to eat at restaurant X—I don’t think I’m inhibiting his free will. I do hope I’m influencing his preferences.
In this case, “free will” is a matter of degree. Sure, you aren’t inhibiting your friend’s choices by force, but you are still affecting them. Left to his own devices, he would’ve chosen restaurant Y—but you caused him to choose restaurant X, instead.
I could con him with a lie that is very likely to make him happy but would result in me getting all of his money and his stuff. What is your moral evaluation of this action?
This action is not entirely analogous, because, while your victim might experience a temporary boost in happiness, he will experience unhappiness once he finds out that his stuff is gone, and that you tricked him. Thus, the total amount of happiness he experiences throughout his life will undergo a net decrease.
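With some invented numbers, the net-happiness accounting might look like this:

```python
# Entirely made-up numbers, just to illustrate the net-happiness claim.
happiness_deltas = [
    +5,   # temporary boost from believing the con
    -20,  # later: discovering his money and stuff are gone
    -10,  # later: discovering that someone he trusted tricked him
]
print(sum(happiness_deltas))  # -25: a net decrease over his lifetime
```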
The more interesting question is, “what if I could con the person in such a way as to grant him sustained happiness?” I am not sure whether doing so would be moral or not; but I’m also not entirely sure whether such a feat is even possible.
Instead, I think it shows that each of these actions is immoral, but of different magnitudes.
Agreed, assuming that the actions are, in fact, immoral.
That said, does it really matter why I do nice things for people, as long as I do them?
From an economics standpoint it doesn’t matter. From a morality standpoint I would say it is all that does matter.
Consider: your friend asks you to get him a cup of coffee—with sugar, please! You go make the coffee and put in a generous amount of white powder. Unknown to you, this isn’t sugar; it is cyanide. Your friend drinks the coffee and falls down dead. What is your moral culpability here?
In a second instance, someone who thinks of you as a friend asks you for a cup of coffee—with sugar, please! You aren’t actually this person’s friend, though; you hate them. You make the cup of coffee, but instead of putting the sugar in it, you go to the back room where you usually keep your cyanide powder. You find a bag of white powder and put a large quantity into the coffee. Unknown to you, this isn’t cyanide; it has been swapped for sugar. Your enemy drinks the coffee and enjoys it. What is your moral culpability here?
From a strict, bottom-line standpoint, you are a murderer in the first case and totally innocent in the second. And yet, that doesn’t feel right. Your intent in the first case was to help a friend. I would say that you have no moral culpability for his death. In the second case, your intent was to kill a person. I would say you bear the same moral culpability you would bear had you actually succeeded.
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions. As such, if your intent in doing good is to benefit yourself, I think it is fair to say that the act is morally neutral (or at least less moral than it could be). If you intend simply to do good, then I think your actions are morally good, even if the consequences are not.
In my scenario, the answer is either “no”, or “not as effectively”.
I would say this is the light of truth shattering your illusion about being a good person then. Maybe that realization will drive you to overcome the akrasia and you can become a good person in fact as well as in your desires.
Left to his own devices, he would’ve chosen restaurant Y—but you caused him to choose restaurant X, instead
What I hope is happening is that my friend’s preferences include a variable which accounts for the preferences of his friends. That way, when I tell him where I want to go, I am informing his decision-making algorithm without actually changing his preferences. If I wanted to go to X less, then my friend would want to go to X less.
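Something like this toy sketch is what I have in mind (the function, the weight, and the numbers are all invented for illustration):

```python
# A toy model of a decision procedure that weighs a person's own
# preferences together with the stated preferences of friends.
# Everything here is hypothetical and illustrative.

def choose_restaurant(own_prefs, friend_prefs, friend_weight=0.5):
    """Pick the option with the highest combined preference score."""
    options = set(own_prefs) | set(friend_prefs)
    scores = {
        o: own_prefs.get(o, 0) + friend_weight * friend_prefs.get(o, 0)
        for o in options
    }
    return max(scores, key=scores.get)

# Telling my friend "I want X" raises his friend_prefs["X"] input; it
# informs his algorithm without rewriting own_prefs. If I wanted X less,
# his combined score for X would drop accordingly.
friend_own = {"X": 0.4, "Y": 0.6}   # left alone, he slightly prefers Y
my_stated  = {"X": 0.9, "Y": 0.1}   # I state a strong preference for X

print(choose_restaurant(friend_own, my_stated))  # -> X
```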
This action is not entirely analogous, … The more interesting question is...
Agreed. I don’t think this case would be moral, though (it would be a closer fit to the other situation). I think it still qualifies as a usurpation of another person’s free will and is therefore still immoral even if it makes people happy.
I can try again with another hypothetical. A girl wants to try ecstasy. She approaches a drug dealer and explains she has never tried it but would like to. The drug dealer supplies her with a pill, which she takes. This isn’t ecstasy, though; it is Rohypnol. The girl blacks out and the drug dealer rapes her while she is unconscious, then cleans her up and leaves her on a couch. The girl comes to. Ecstasy wasn’t quite like it was described to her, but she is proud of herself for being adventurous and for trying new things. She isn’t some square who is too afraid to try recreational drugs, and she will believe this about herself and attach a good feeling to it for the rest of her life. Has anyone done anything wrong here? The drug dealer was sexually gratified and the girl feels fulfilled in her experimentation. This feels like a case where every party is made happier, and yet I would still say that the drug dealer has done something immoral, even if he knew for sure how the girl would react.
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions.
From whose point of view? If you are committed to poisoning your hapless friend, then presumably you either don’t care about morality, or you had determined that this action would be sufficiently moral. If, on the other hand, I am attempting to evaluate the morality of your actions, then I can only evaluate the actions you did, in fact, perform (because I can’t read your mind). Thus, if you gave your friend a cup of coffee with sugar in it, and, after he drank it, you refrained from exclaiming “This cannot be! So much cyanide would kill any normal man!”—then I would conclude that you’re just a nice guy who gives sugared coffee to people.
I do agree with you that intent matters in the opposite case; this is how we can differentiate murder from manslaughter.
I would say this is the light of truth shattering your illusion about being a good person then. Maybe that realization will drive you to overcome the akrasia...
Maybe it won’t, though. Thus, we have traded some harmless delusions of goodness for a markedly reduced expected value of my actions in the future (I might still do good deeds, but the probability of this happening is lower). Did society really win anything?
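To make that trade concrete with invented numbers:

```python
# Illustrative only: these probabilities and values are invented, not measured.
value_of_good_deed = 1.0

p_good_with_belief    = 0.9  # belief in the reward cancels the akrasia
p_good_without_belief = 0.3  # akrasia mostly wins

ev_with    = p_good_with_belief * value_of_good_deed     # 0.9
ev_without = p_good_without_belief * value_of_good_deed  # 0.3

print(f"society's expected loss per opportunity: {ev_with - ev_without:.1f}")  # 0.6
```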
If I wanted to go to X less, then my friend would want to go to X less.
Sounds like this is still mind control, just to a (much) lesser degree. Instead of altering your friend’s preferences directly, you’re exploiting your knowledge of his preference table, but the principle is the same. You could’ve just as easily said, “I know that my friend wants to avoid pain, so if I threaten him with pain unless he goes to X less, then he’d want to go to X less”.
I can try again with another hypothetical. A girl wants to try ecstasy...
I don’t think this scenario is entirely analogous either, though it’s much closer. In this example, there was a very high probability that the girl would sustain severe lasting damage (STDs, pregnancy, bruising, drug overdose or allergy, etc.). Less importantly, the girl received some misleading information about drugs, which may cause her to make harmful decisions in the future. Even if none of these things happened in this specific case, the probability of their happening was relatively high. Thus, we would not want to live in a society where acting as the drug dealer did is considered moral.