These kinds of “What if” statements followed by something of fundamentally unknowable probability...
Minor nitpick: these statements have a very low probability of being true due to the lack of evidence for them, not an unknowable probability of being true as your sentence would imply.
This works no matter who is right (or if neither of us is right).
Ok, but what about unfalsifiable (or incredibly unlikely to be falsified) claims? Let’s imagine that I am a religious person who believes that (a) the afterlife exists, and (b) the gods will reward people in this afterlife in proportion to the number of good deeds each person accomplished in his Earthly life. The exact nature of the reward doesn’t matter; whatever it is, I’d consider it awesome. Furthermore, let’s imagine that I believe that (c) no objective empirical evidence of this afterlife and these gods’ existence could ever be obtained; nonetheless, I believe in it wholeheartedly (perhaps the gods revealed the truth to me in an intensely subjective experience, or whatever). As a direct result of my beliefs, (d) I am driven to become a better person and do more good things for more people, thus becoming generally nicer, etc.
In this scenario, should my belief be destroyed by the truth?
Suppose we are neighbors. By some mixup, the power company is combining my electric bill with yours. You notice that your bill is unusually high, but you pay it anyway because you want electricity. In fact, you like electricity so much that you are happy to pay even the high bill to get continued power. Now, suppose that I knew all the details of the situation. Should I tell you about the error?
I think this case is pretty similar to the one you’ve described about the religion that makes you do good things. You pay my bill because you want a good for yourself. I am letting you incur a cost that you may not want to incur, because it benefits me.
I think in the electricity example I have some moral obligation to tell you our bills have been combined. I think this carries over to the religious example. There is a real benefit to me (and to society) to let you continue to labor under your false assumption that doing good deeds would result in magic rewards, but I still think it would be immoral to let this go on. I think the right thing to do would be to try and destroy your false belief with the truth and then try to convince you that altruism can be rewarding in and of itself. That way, you may still be an altruist, but you won’t be fooled into being one.
I think this case is pretty similar to the one you’ve described about the religion that makes you do good things.
Not entirely. In your example, the power bill is a zero-sum game; in order for you to gain a benefit (free power), someone has to experience a loss (I pay extra for your power in addition to mine). Is there a loss in my scenario, and if so, to whom?
There is a real benefit to me (and to society) to let you continue to labor under your false assumption that doing good deeds would result in magic rewards, but I still think it would be immoral to let this go on.
Why do you think this would be immoral? I could probably make a consequentialist argument that it would in fact be moral, but perhaps you’re using some other moral system?
someone has to experience a loss (I pay extra for your power in addition to mine). Is there a loss in my scenario, and if so, to whom
The cost is to you. You are the one doing good deeds. I consider the time and effort (and money) you expend doing good deeds for other people to be the cost here.
Why do you think this would be immoral?
My feeling is that this is an implicit corruption of your free will. You aren’t actually intending to pay for my power; you are just doing it because you don’t realize you are. Similarly, in the religion example, what you actually intend to do is earn your way into heaven (or pay for your own power), but what you are actually doing is hard work that benefits others, and you won’t go to heaven for it (the analogue of paying for my electricity).
I don’t have the time to fully spell out my moral system here, but I think there is a class of actions which reduce the free will of other people. At the very extreme end of this class would be slavery: “Do my work or I’ll hurt or kill you.” At the opposite end of the spectrum (but still a member of the same class) is something like letting people serve you, when they don’t intend to, because of a lie by omission.
One of the things I respect and value about human beings is their free will. By diminishing the free will of other people I would be diminishing the value of other human beings and I am calling that “immoral behavior”. This, I think, is why it is immoral to let you believe a lie which hurts you even if it helps me.
We might all benefit if we tricked Mark Zuckerberg into paying our power bills. He could afford to do so and to go on doing his thing, and we would all be made better off. So, should we do so? If we should, why should we stop at the power bill? Why should we limit ourselves to tricking him? Why not just compel him through force?
The cost is to you. You are the one doing good deeds. I consider the time and effort (and money) you expend doing good deeds for other people to be the cost here.
Ah, I understand, that makes sense. In this case, the magnitude of the net loss/gain depends on whether “become a better person” is one of my goals. If it is, then belief in this kind of afterlife basically acts as a powerful anti-akrasia aid, motivating me to achieve this goal. In this scenario, would you say that taking this tool away from me would be the right thing to do?
My feeling is that this is an implicit corruption of your free will.
What do you mean by “free will”? Different people use this term to mean very different things.
Why not just compel him through force?
This is different from tricking him [1]. When we trick someone in the manner we were discussing (i.e., by conning him), we aren’t just taking away his stuff—we are giving him happiness in return. By contrast, when we take his stuff away by force, we’re giving him nothing but pain. Thus, even if we somehow established that conning people is morally acceptable, it does not follow that robbing them is acceptable as well.
[1] As Sophie from Leverage points out in one of the episodes.
then belief in this kind of afterlife basically acts as a powerful anti-akrasia aid, motivating me to achieve this goal
This depends very much on what you mean by “better person”. Returning a lost wallet because you know the pain of losing things and because you understand the wallet’s owner is a sapient being who will experience similar pain is the kind of thing a good person would do. Returning a lost wallet because you expect a reward is more of a morally neutral thing to do. So, if you are doing good deeds because you expect a heavenly reward, then you aren’t really being a good person (according to me); you are just performing actions you expect to get a reward for. I think this belief actually prohibits you from being a good person, because as long as you believe in it you can never be sure whether you are acting out of a desire to be good or out of a desire to go to heaven.
In this scenario, would you say that taking this tool away from me would be the right thing to do?
I would. If you use this belief to trick yourself into believing you are a better person (see above) then this is just doubling down for me. False beliefs should be destroyed by the truth. I should first destroy the belief in the heavenly reward for good deeds and then let the truth test you. Do you still do good things without hope of eternal reward? If yes, then you are a good person. If not, then you aren’t and you never were.
What do you mean by “free will”?
By “free will” I mean a person’s ability to choose the option they most prefer. So, if I tell my friend I want to eat at restaurant X—I don’t think I’m inhibiting his free will. I do hope I’m influencing his preferences. I assume somewhere in his decision-making algorithm is a routine that considers the strength of his friends’ preferences, and that evaluation is used to modify his preference to eat at restaurant X. I do think I’d be inhibiting his free will if I were to falsely say, “Well, we can’t go to Y because it burned down” (or let him continue to believe this without correcting him). I am subverting free will by distorting the apparent available options. I think this also fits if you use a threat of harm (“I’ll shoot you if we don’t go to X”) to remove an option from someone’s consideration.
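Purely as an illustration (nothing in this exchange specifies an actual algorithm, and the function, option names, and weights below are invented), the distinction being drawn here can be sketched in a few lines of Python: influencing a preference term leaves every option on the table, while a lie deletes an option outright.

```python
# Minimal, purely illustrative sketch: choice as picking the
# highest-scoring option, where a friend's stated preference enters
# as one weighted term among others.

def choose(options, own_utility, friend_utility, friend_weight=0.5):
    """Return the option with the best combined score."""
    def score(option):
        return own_utility[option] + friend_weight * friend_utility[option]
    return max(options, key=score)

own = {"X": 0.4, "Y": 0.6}     # left alone, he slightly prefers Y
friend = {"X": 1.0, "Y": 0.0}  # I tell him I want X

# Influencing preferences: Y stays available, the weighting just shifts.
print(choose({"X", "Y"}, own, friend))  # -> X (0.9 beats 0.6)

# Subverting free will: the lie "Y burned down" removes Y from the
# apparent option set, so no weighing of preferences can recover it.
print(choose({"X"}, own, friend))       # -> X, but no longer a real choice
```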
by conning him), we aren’t just taking away his stuff—we are giving him happiness in return
I know a mentally handicapped person. I think it’s very likely I could trick this person out of their money. I could con him with a lie that is very liable to make him happy but would result in me getting all of his money and his stuff. What is your moral evaluation of this action?
It seems to me that if it is possible to trick Zuckerberg into paying my power bill, then it is possible because he is gullible enough to believe my con. If it is possible for me to trick the mentally disabled, then it is possible because they are gullible enough for me to con. So, I don’t see why there should be any moral difference between tricking the mentally disabled out of their wealth and tricking Zuckerberg out of his. Nigerian email scams should be okay too, right?
I suppose there is some difference here in that Zuckerberg could afford to be conned out of a power bill or two whereas the average Nigerian scam victim cannot. I interpret this difference as being one of scale, though. I think it would be worse to trick the elderly or the mentally disabled out of their life savings than it would be to trick Zuckerberg out of the same number of dollars. This doesn’t mean that it is morally permissible to trick Zuckerberg out of any money, though. Instead, I think it shows that each of these actions is immoral, but of different magnitudes.
This depends very much on what you mean by “better person”.
In this scenario, I mean “someone who believes that doing nice things for people is a valuable goal, and who strives to act in accordance with this goal”. That said, does it really matter why I do nice things for people, as long as I do them? Outside observers can’t tell what I’m thinking, after all, only what I’m doing.
Do you still do good things without hope of eternal reward?
In my scenario, the answer is either “no”, or “not as effectively”. I would like to do good things, but a powerful case of akrasia prevents me from doing them most of the time. Believing in the eternal reward cancels out the akrasia.
So, if I tell my friend I want to eat at restaurant X—I don’t think I’m inhibiting his free will. I do hope I’m influencing his preferences.
In this case, “free will” is a matter of degree. Sure, you aren’t inhibiting your friend’s choices by force, but you are still affecting them. Left to his own devices, he would’ve chosen restaurant Y—but you caused him to choose restaurant X, instead.
I could con him with a lie that is very liable to make him happy but would result in me getting all of his money and his stuff. What is your moral evaluation of this action?
This action is not entirely analogous, because, while your victim might experience a temporary boost in happiness, he will experience unhappiness once he finds out that his stuff is gone, and that you tricked him. Thus, the total amount of happiness he experiences throughout his life will undergo a net decrease.
The more interesting question is, “what if I could con the person in a way that grants him sustained happiness?” I am not sure whether doing so would be moral or not; but I’m also not entirely sure whether such a feat is even possible.
Instead, I think it shows that each of these actions is immoral, but of different magnitudes.
Agreed, assuming that the actions are, in fact, immoral.
That said, does it really matter why I do nice things for people, as long as I do them?
From an economics standpoint it doesn’t matter. From a morality standpoint, I would say it is all that matters.
Consider: your friend asks you to get him a cup of coffee—with sugar, please! You go make the coffee and put in a healthy amount of the white powder. Unknown to you, this isn’t sugar; it is cyanide. Your friend drinks the coffee and falls down dead. What is your moral culpability here?
In a second instance, someone who thinks of you as a friend asks you for a cup of coffee—with sugar, please! You actually aren’t this person’s friend, though; you hate them. You make the cup of coffee, but instead of putting the sugar in it, you go to the back room, where you usually keep your cyanide powder. You find a bag of the white powder and put a large quantity into the coffee. Unknown to you, this isn’t cyanide; it has been switched with sugar. Your enemy drinks the coffee and enjoys it. What is your moral culpability here?
From the strict bottom-line standpoint, you are a murderer in the first case and totally innocent in the second. And yet, that doesn’t feel right. Your intent in the first case was to help a friend. I would say that you have no moral culpability for his death. In the second case, your intent was to kill a person. I would say you bear the same moral culpability you would bear had you actually succeeded.
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions. As such, if your intent in doing good is to benefit yourself, I think it is fair to say that that is morally neutral (or at least less moral than it could be). If you intend simply to do good, then I think your actions are morally good, even if the consequences are not.
In my scenario, the answer is either “no”, or “not as effectively”.
I would say this is the light of truth shattering your illusion about being a good person then. Maybe that realization will drive you to overcome the akrasia and you can become a good person in fact as well as in your desires.
Left to his own devices, he would’ve chosen restaurant Y—but you caused him to choose restaurant X, instead
What I hope is happening is that my friend’s preferences include a variable which accounts for the preferences of his friends. That way, when I tell him where I want to go, I am informing his decision-making algorithm without actually changing his preferences. If I wanted to go to X less, then my friend would want to go to X less.
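Assuming the same invented model as in the sketch above (same caveats: hypothetical names and numbers), this coupling is simply the friend’s effective preference being an increasing function of my stated one, while his underlying preference is left untouched:

```python
# Hypothetical continuation of the earlier sketch: the friend's effective
# preference for X rises and falls with how strongly I say I want X,
# without his underlying preference (0.4) ever being rewritten.
def effective_preference(own_pref_x, my_stated_pref_x, friend_weight=0.5):
    return own_pref_x + friend_weight * my_stated_pref_x

print(effective_preference(0.4, 1.0))  # I want X a lot  -> 0.9
print(effective_preference(0.4, 0.2))  # I want X less   -> 0.5
```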
This action is not entirely analogous, … The more interesting question is...
Agreed. I don’t think this case would be moral though (though it would be a closer fit to the other situation). I think it still qualifies as a usurpation of another person’s free will and therefore is still immoral even if it makes people happy.
I can try again with another hypothetical. A girl wants to try ecstasy. She approaches a drug dealer and explains she has never tried it but would like to. The drug dealer supplies her with a pill, which she takes. This isn’t ecstasy, though; it is Rohypnol. The girl blacks out and the drug dealer rapes her while she is unconscious, then cleans her up and leaves her on a couch. The girl comes to. Ecstasy wasn’t quite like it was described to her, but she is proud of herself for being adventurous and for trying new things. She isn’t some square who is too afraid to try recreational drugs, and she will believe this about herself and attach a good feeling to this for the rest of her life. Has anyone done anything wrong here? The drug dealer was sexually gratified and the girl feels fulfilled in her experimentation. This feels like a case where every party is made happier, and yet I would still say that the drug dealer has done something immoral, even if he knew for sure how the girl would react.
I think this example shows that what matters is not the consequences of your actions, but your intent when you take those actions.
From whose point of view? If you are committed to poisoning your hapless friend, then presumably you either don’t care about morality, or you’d determined that this action would be sufficiently moral. If, on the other hand, I am attempting to evaluate the morality of your actions, then I can only evaluate the actions you did, in fact, perform (because I can’t read your mind). Thus, if you gave your friend a cup of coffee with sugar in it, and, after he drank it, you refrained from exclaiming “This cannot be! So much cyanide would kill any normal man!”—then I would conclude that you’re just a nice guy who gives sugared coffee to people.
I do agree with you that intent matters in the opposite case; this is how we can differentiate murder from manslaughter.
I would say this is the light of truth shattering your illusion about being a good person then. Maybe that realization will drive you to overcome the akrasia...
Maybe it won’t, though. Thus, we have traded some harmless delusions of goodness for a markedly reduced expected value of my actions in the future (I might still do good deeds, but the probability of this happening is lower). Did society really win anything?
If I wanted to go to X less, then my friend would want to go to X less.
Sounds like this is still mind control, just to a (much) lesser degree. Instead of altering your friend’s preferences directly, you’re exploiting your knowledge of his preference table, but the principle is the same. You could’ve just as easily said, “I know that my friend wants to avoid pain, so if I threaten him with pain unless he goes to X less, then he’d want to go to X less”.
I can try again with another hypothetical. A girl wants to try ecstasy...
I don’t think this scenario is entirely analogous either, though it’s much closer. In this example, there was a very high probability that the girl sustained severe lasting damage (STDs, pregnancy, bruising, drug overdose or allergy, etc.). Less importantly, the girl received some misleading information about drugs, which may cause her to make harmful decisions in the future. Even if none of these things happened in this specific case, the probability of them happening is relatively high. Thus, we would not want to live in a society where acting like the drug dealer did is considered moral.
If there is no empirical evidence either way about a belief, how would one go about destroying it? Beliefs pay rent in anticipated experience, not anticipated actions.
In short, the religious person has adopted a terminal value of being a nicer person, but is confused and thinks this is an instrumental value in pursuit of the “real” terminal value of implementing the desires of a supernatural being. Epistemic rationality has no more to say about this terminal value than about any other terminal value.
If there is no empirical evidence either way about a belief, how would one go about destroying it?
One way you could go about destroying a belief like that is to use Ockham’s Razor: sure, it’s possible that all kinds of unfalsifiable beliefs are true, but why should you waste time believing any of them if they have no effect on anything?
However, if the believer has some subjective evidence for the belief—for example, if he personally experienced the gods talking to him—then this attack cannot work. In this case, would you still say that his belief is “indestructible”?