There’s also the fear that, if there’s no objective morality and someone starts doing evil things, you couldn’t make them stop by argument.
Does anyone know of an example where arguing objective morality with someone who is doing evil things made them stop?
(ETA: The point being that I agree with the parent and grandparent posts that people who won’t rationally discuss morality are often afraid of things like this. I’m just wondering whether the belief underlying that fear is true or false.)
On a trivial scale, I’ve revised quite a few opinions based on objective rational arguments that my action was causing harm in ways I had previously been unaware of. The example that immediately comes to mind is modifying my vocabulary to try and avoid offensive words. The concept of privilege and “isms” in general, really.
I would expect that peer pressure can make people stop doing evil things (either by force, or by changing their cost-benefit calculation of evil acts). Objective morality, or rather a definition of morality consistent within the group, can help organize efficient peer pressure. If everyone obeys the same morality, they should be more ready to defend it, because they know they will be in the majority.
Without a shared morality, and its twin, hypocrisy, organizing peer pressure on wrongdoers is difficult.
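To make the cost-benefit point concrete, here is a minimal sketch in Python (the payoff numbers are invented purely for illustration): peer pressure works by raising the expected cost of the act until it no longer pays.

```python
def expected_payoff(benefit, p_punished, penalty):
    """Expected value of an evil act to the person considering it."""
    return benefit - p_punished * penalty

# Without coordinated peer pressure, punishment is unlikely and mild:
print(expected_payoff(benefit=10, p_punished=0.1, penalty=20))  # 8.0 -> the act looks worthwhile
# With a shared morality organizing peer pressure, punishment is likely and severe:
print(expected_payoff(benefit=10, p_punished=0.8, penalty=30))  # -14.0 -> the act no longer pays
```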
So in a conversation between a person A who believes in objective morality and a person B who does not, a possible motive for A is to convince onlookers by any means possible that objective morality exists. Convincing B is not particularly important, since effective peer pressure merely requires having enough people on board, not any particular individual. In those conversations, I always had the role of B, and I assumed, perhaps mistakenly, that A’s primary goal was to persuade me, since A was talking to me. Thank you for the insight.
“Any means possible” is a euphemism for “really big stick”!
Hm. It seems like there’s more to say about that.
For example, the peer pressure to participate in picking on low-status figures in a high-school class certainly appears to be strong, and not difficult to organize—indeed, it occurs spontaneously.
I suppose I’m willing to accept that those who refuse to participate aren’t “wrongdoers”, but I’m not sure why that should matter; if there’s a distinction between wrongdoers and other norm-violators you are calling out here, it would benefit from being called out more explicitly.
Conversely, I’m also willing to accept that picking on the low-status figures is the shared morality in this case, but in that case I think the whole conversation becomes less connotationally misleading if we talk about shared behavioral norms and leave the term “morality” (let alone “objective morality”) out of it.
I would say that “becoming strong and oppressing the weak” is the default goal. You don’t need any kind of morality here; it’s just the biology of a social species. Being strong has natural rewards.
Morality is what allows you to have alternative goals. Morality means that “X is important too”, sometimes even more important than being strong (though usually it is good to both be strong and do X). Morality gives you social rewards for doing X.
Being strong is favored by genes, doing X is favored by (X-promoting) memes. In the absence of memes (more precisely, in the absence of strong memes saying what is right and wrong), humans fall back on their natural social behavior, the pecking order. In the presence of such memes, humans try to do X, and at the same time secretly try to be strong, but they cannot use too obvious means for that.
Technically, we could call the pecking order a “null morality”; like the “null hypothesis” in statistics.
That’s forgetting that morality doesn’t come from nowhere; it comes from genes too. Because life is full of iterated prisoner’s dilemmas, because gene survival requires the survival of your close relatives, because of the way the brain is shaped (like the fact that empathy very likely comes, at least in part, from the way we reuse our own brain circuits to predict the behavior of others).
Moral theories are “artificial constructs”, as are all theories. They are generalizations, they are abstractions, they can conflict with the “genetic morality”, and yes, memes play a huge role in morality. But the core of morality comes from our genes: care for our family, “tit-for-tat with initial cooperation” as the winning strategy for the IPD, empathy, …
Even if ultimately everything comes from the genes, we have to learn some things, while other things come rather automatically.
We educate children to behave nicely to others; they don’t get this ability automatically just because of their genes. On the other hand, children are able to create “Lord of the Flies”-like systems at school without being taught to. Both behaviors are based on evolution, both promote our genes in certain situations, but still one is the default option, and the other must be taught (is transferred by memes).
And by the way, the Prisoner’s Dilemma is not a perfect model of reality, and the differences are very relevant for this topic. The Prisoner’s Dilemma and the Iterated Prisoner’s Dilemma are modelled as a series of 1:1 encounters, where what happens in an encounter is hidden from everyone except the two interacting players; each player tries to maximize their own utility; and each encounter is scored independently. In real life, people observe what others are doing even in encounters they are not part of; people have families and are willing to sacrifice some of their utility to increase their family’s utility; and the results of one encounter may influence your survival or death, your health, your prestige, etc., which influence the rules of the following encounter.

This results in new strategies, such as “signal membership in a powerful group G, play tit-for-tat with initial cooperation against members of G, and defect against everyone else”, which will work if the group G has a majority. Now the problem is how people will agree on what the right group G is. In small societies, the family can be such a group; in larger societies, memetic similarity can play the same role: if you consider that humans are not automatically strategic, why not make a meme M which teaches them this strategy and at the same time defines group G as “people who share the meme M”? Here comes morality, religion, football team fans, et cetera.
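A minimal sketch of that strategy, using the standard one-shot Prisoner’s Dilemma payoffs (everything else is invented for illustration). It only covers the simple case where group membership is perfectly visible; in that case the members’ advantage grows with the size of G.

```python
# Standard Prisoner's Dilemma payoffs: temptation > reward > punishment > sucker.
T, R, P, S = 5, 3, 1, 0

def payoff(my_move, their_move):
    """My payoff for one encounter ('C' = cooperate, 'D' = defect)."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(my_move, their_move)]

def average_score(i_am_in_g, fraction_in_g):
    """Average per-encounter score against a random member of the population.

    Members of G cooperate with other members and defect against outsiders;
    outsiders simply defect against everyone.
    """
    vs_member = payoff('C', 'C') if i_am_in_g else payoff('D', 'D')
    vs_outsider = payoff('D', 'D')
    return fraction_in_g * vs_member + (1 - fraction_in_g) * vs_outsider

for fraction in (0.2, 0.5, 0.8):
    print(fraction, average_score(True, fraction), average_score(False, fraction))
# 0.2 1.4 1.0 / 0.5 2.0 1.0 / 0.8 2.6 1.0 -- the bigger G is, the better its members do
```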
OK, cool; thanks for clarifying.
I would certainly agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult. I would also agree that there always tends to be some set of behavioral norms, often several conflicting sets, some of which we may not want to label “morality”.
It is not clear to me that the distinction you want to draw between “natural” and “alternative” norms is quite as clearcut as you make it sound. Nor is it clear to me that that distinction maps quite as readily to genetic vs. cultural factors as you imply here.
But I would certainly agree that some norms are more easily arrived at (that is, require less extensive training to impart) than others, and that in the absence of strong enforcement of the harder-to-impart norms (what you’re describing as “alternative goals”/”morality” propagated by memes), the easier-to-impart ones (what you describe as “natural” and genetically constrained) tend to influence behavior more.
I guess my comment seems too dichotomous; I did not intend it that way. Basically, I wanted to say that if you have, e.g., children without a proper upbringing (or in an environment that allows them to act against their upbringing), their behavior easily collapses to something most dramatically described in the book “Lord of the Flies”, which is rather similar to what social animals do: establishing group hierarchy by using intra-group violence and threats. I call it “natural” because this is what happens unless people use some strategy to prevent it.
But of course both building the pecking order and the desire to avoid the negative consequences for people at the bottom are natural, i.e. driven by the self-interest of our genes; it’s just that the former is easier to do, while the latter requires some thinking, strategy, coordination, and infrastructure (laws, police, morality, religion, etc.) to be done successfully. It feels worth doing, but it can be done in a few different ways, and we often disagree about the details.
It’s like in the Prisoner’s Dilemma: in the short term (one turn), the choice to defect is always better than to cooperate; and if you imagine an agent without a memory, or unable to distinguish between individual players, then in a world consisting of such agents, always defecting would be the winning strategy. Only the possibility to remember and iterate allows the strategy of punishment, and now “tit-for-tat with initial cooperation” becomes a successful implementation of the more general principle “cooperate with those who cooperate with you, and punish those who defect”.
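A minimal sketch of that paragraph (the payoffs are the standard textbook values, everything else is invented): mutual defection scores far worse than mutual cooperation, and a defector can only exploit tit-for-tat once before being punished in every later round.

```python
# Standard Prisoner's Dilemma payoffs: temptation > reward > punishment > sucker.
T, R, P, S = 5, 3, 1, 0
PAYOFFS = {('C', 'C'): (R, R), ('C', 'D'): (S, T), ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def play(strategy_a, strategy_b, rounds=10):
    """Score two strategies against each other in an iterated PD."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_b), strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    return 'D'

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move -- this needs memory.
    return 'C' if not opponent_history else opponent_history[-1]

print(play(always_defect, always_defect))  # (10, 10): mutual punishment
print(play(tit_for_tat, tit_for_tat))      # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))    # (9, 14): exploited once, then the defector is punished
```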
But in real life, sometimes those who can punish are different from those who have been harmed. For example, if someone steals from you, you will try to punish them; but for a society without theft, it is necessary that people punish even those who stole from someone else. (Otherwise the thieves would just have to carefully select their targets among the weaker people.) Here we have a problem, because engaging in punishment has some costs (if you see someone stealing and try to stop them, the thief may hurt you) and no direct benefit for the punisher. This can be fixed by a system where people are rewarded for punishing those who have harmed someone else. For such a system to work, it is necessary to have an agreement about what counts as harm, what the proper punishment is, and what the reward is (social esteem for the hero, a salary for the policeman, etc.). This is difficult to organize.
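And a minimal sketch of the bystander’s side of it (the numbers are invented): without an agreed-upon reward, a self-interested third party has no reason to bear the cost of punishing a thief who didn’t harm them.

```python
def punisher_net_payoff(p_injury, injury_cost, reward):
    """Expected net payoff to a bystander who intervenes against a thief."""
    return reward - p_injury * injury_cost

# No agreed-upon reward: intervening is a pure expected loss for the bystander.
print(punisher_net_payoff(p_injury=0.2, injury_cost=10, reward=0))  # -2.0
# Social esteem or a policeman's salary tips the balance toward enforcement.
print(punisher_net_payoff(p_injury=0.2, injury_cost=10, reward=5))  # 3.0
```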
Yes, I continue to agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult, and that some norms are easier to coordinate (more natural, if you like) than others.
And yes, the dichotomy is part of what I’m skeptical about. Even in a “pecking order” environment, for example, I suspect a norm saying that low-status tribe members don’t get to steal from high-status members is relatively easy to coordinate. That’s not the same as my culture’s notion of theft, but neither is it the same as a complete absence of a notion of theft. I suspect it’s much more of a continuum, and much more variable, than you make it sound.
I agree there is a continuum of possibilities; that’s how these things developed. But it does not mean that all parts of the continuum exist in reality with the same frequency, or even that the frequency is a monotonic function.
I guess I have trouble explaining what I mean, so I will use a metaphor: computers. You can have no computer. You can use your fingers. You can use pebbles. You can use an abacus. You can have a mechanical calculator, a vacuum-tube calculator, or some kind of integrated-circuit computer. It’s not literally a continuum, but there are many steps. But now make a histogram of how often people around you use this or that… and you will probably find that most people use some integrated-circuit computing machine, or nothing. There is very little in between. So in theory there is a continuum, but it can be approximated as just two choices: an integrated-circuit computer, or no computing machine. There is very little incentive to use an abacus, or even to invent one. You don’t upgrade from “no calculator” to “integrated-circuit calculator” by discovering the abacus, etc.; you just go to a shop and buy one. And even those people who design and build integrated-circuit calculators don’t start from the abacus. This part in the middle does not exist anymore, because compared with both extremes, it is not cost-effective.
It’s not the same with morality, but my point is that there is so much morality around (it feels kind of funny when I write it) that very few people are inventing morality from scratch. You copy it, or you ignore it; or you copy some parts, or you copy it and forget some parts. Inventing it all in one lifetime is almost impossible. So to me it seems safe to say that the higher levels must be carried by memes. It’s like saying that you can find pebbles or invent an abacus, but you have to buy an integrated-circuit computer, unless you are an exceptional person.
I agree with you that very few behavioral norms are invented from scratch, and that the more complex ones pretty much never are, and that they must therefore be propagated culturally.
That said, your analogy is actually a good one, in that I have the same objection to the analogy that I had to the original.
Unlike you, I suspect that there’s quite a lot in between: some people use integrated-circuit computers, some people (often the same people) use pen and paper, some people use a method of successive approximation, some people count on their fingers. It depends on the people, it depends on the kind of calculation they are doing, and it depends on the context in which they’re doing it; I might open an Excel spreadsheet to calculate 15% of a number if I’m sitting in front of my computer, I might calculate it as “a tenth plus half of a rounded-up tenth” if I’m working out a tip at a restaurant, and I might solve it with pencil and paper if it’s the tenth in a series of arithmetic problems I’m solving on a neuropsych examination.
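For what it’s worth, the restaurant heuristic I mean is just this (the bill amount is an arbitrary example):

```python
import math

bill = 47.00
exact = 0.15 * bill                        # 7.05
tenth = bill / 10                          # 4.70
estimate = tenth + 0.5 * math.ceil(tenth)  # 4.70 + 2.50 = 7.20, close enough for a tip
print(exact, estimate)
```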
When you say “most people use some integrated-circuit computing machine, or nothing” you end up excluding a wide range of actual human behavior in the real world.
Analogously, I think that when you talk about the vast excluded middle between “morality” and “pecking order” you exclude a similarly wide range of actual human behavior in the real world.
When that range is “approximated as just having two choices” something important is lost. If you have some specific analytical goal in mind, perhaps the approximation is good enough for that goal… I’m afraid I’ve lost track of what your goal might be, here. But in general, I don’t accept it as a good-enough approximation; the excluded middle seems worthy of consideration.
Do people change their minds much about anything?
I suspect what people are afraid of is being caught out holding an unarguable position.
There are occasional religious conversions, and if you follow the thread up to the link below I apparently got “syllogism” to give up on preference utilitarianism, whatever that is:
http://lesswrong.com/lw/435/what_is_eliezer_yudkowskys_metaethical_theory/3yj3
Both hypotheses make sense to me: perhaps they’re afraid that it won’t work to persuade people if they don’t defend it, and perhaps it’s simpler and they know they have no position to argue from but they still don’t want to lose.
For better or worse, I think Eugeine Nier was arguing a point about morality identical to one of yours (Peterdjones) and he started dodging my questions at:
http://lesswrong.com/lw/5eh/what_is_metaethics/42ul
If you are aware that people are dodging because they have an unarguable position, perhaps you don’t want to participate in that. Do you want to help him out and answer the question I asked there?
Done.
Yes, this is a common occurrence. St. Augustine, for instance, is a well-known example.
I’m inclined to believe you, but his biography on Wikipedia describes a long and varied life, and in a few minutes of examination I did not find any clear examples of arguments about morality persuading anybody to stop doing evil. I’m sure it’s in there somewhere. Which event(s) in his life are you talking about?
Here is where it talks about it in the Wiki article: http://en.wikipedia.org/wiki/Augustine_of_Hippo#Christian_conversion
A full account is given in The Confessions of St. Augustine.
Sorry, if that’s all you have, it’s not what I’m looking for. What evil did he stop doing because he converted to Christianity? The worst things I see in the biography there were teaching rhetoric and waiting patiently for his 11-year-old fiancée to turn 13 so he could marry her. Those activities were both consistent with the cultural norms of the time. Neither of them seems to have the right flavor to make me want to try arguing morality with someone who is pointing a gun at my head.
He also gave up sleeping with various mistresses; however, given current culture, I doubt you think that is evil.
Arguing morality with someone who is holding a gun to your head doesn’t sound like a very smart thing to do. The most I have done while being held up was give the assailant a set of scriptures with a number to call if he wanted to discuss morality while not holding a gun. If they are holding a gun or otherwise threatening immediate violence against you, then that is usually not the time to be discussing morality, as they are most likely not acting rationally.
Discussing morality with someone who is suicidal can sometimes help. Still, one should call for professional assistance if it is available.
One problem with arguing rationality with someone who has a gun to your head is time: a rational argument for a substantial change tends to take a fair amount of time. You might be able to convince someone with quick “sound bites”, but I’m not sure I’d really call that a rational argument.