I am afraid I cannot claim here any particularly noble motives.
In Jacobian’s text there are, basically, two decision points: the first one is deciding to do good, and the second one is deciding on a course of action. You lose empathy in between them. There are (at least) two ways to interpret this. In one, when you decide to “do good”, you make just a very generic decision to do some unspecified good, and all the actual choices happen at the “course of action” point. In the other, at the first decision point you already decide which particular good you want to work towards, and the second decision point is just the details of implementation.
I didn’t want to start dissecting Jacobian’s post at this level of detail, so I basically simplified it by saying that you lose your empathy before making some (but not necessarily all) choices. I don’t know if you want to classify it as “technically incorrect” :-/
You still haven’t made a single argument in favor of emotional empathy, other than conflating lack of emotional empathy with, in order of appearance: Stalinism, Nazism, witch hunting, and fanaticism. None of this name-calling was supported by any evidence re: empathy.
The argument that I was making or, maybe, just implying is a version of the argument for deontological ethics. It rests on two lemmas: (1) You will make mistakes; (2) No one is a villain in his own story.
To unroll a bit, people who do large-scale evil do not go home to stroke a white cat and cackle at their own evilness. They think they are the good guys and that they do what’s necessary to achieve their good goals. We think they’re wrong, but that’s an outside view. As has been pointed out, the road to hell is never in need of repair.
Given this, it’s useful to have firebreaks, boundaries which serve to stop really determined people who think they’re doing good from doing too much evil. A major firebreak is emotional empathy—it serves as a check on runaway optimization processes which are, of course, subject to the Law of Unintended Consequences.
And, besides, I like humans more than I like optimization algorithms :-P
How about: doing evil (even inadvertently) requires coercion. Slavery, Nazis, tying a witch to a stake, you name it. Nothing effective altruists currently do is coercive (except to mosquitoes), so we’re probably good. However, if we come up with a world improvement plan that requires coercing somebody, we should A) hear their take on it and B) empathize with them for a bit. This isn’t a 100% perfect plan, but it seems to be a decent framework.
Some argument along these lines may work; but I don’t believe that doing evil requires coercion.
Suppose that for some reason I am filled with malice against you and wish to do you harm. Here are some things I can do that involve no coercion.
I know that you enjoy boating. I drill a small hole in your boat, and the next time you go out on the lake your boat sinks and you die.
I know that you are an alcoholic. I leave bottles of whisky around places you go, in the hope that it will inspire you to get drunk and get your life into a mess.
The law where we live is (as in many places) rather overstrict and I know that you—like almost everyone in the area—have committed a number of minor offences. I watch you carefully, make notes, and file a report with the police.
I get to know your wife, treat her really nicely, try to give her the impression that I have long been nursing a secret yearning for her. I hope that some day if your marriage hits an otherwise-navigable rocky patch, she will come to me for comfort and (entirely consensually) leave you for me.
I discover your political preferences and make a point of voting for candidates whose values and policies are opposed to them.
I put up posters near where you live, accusing you of horrible things that you haven’t in fact done.
I put up posters near where you live, accusing you of horrible things that you have in fact done.
None of these involves coercion unless you interpret that word very broadly. Several of them don’t, so far as I can see, involve coercion no matter how broadly you interpret it.
So if you want to be assured of not doing evil, you probably need more firebreaks besides “no coercion”.
I agree with gjm that evil does not necessarily require coercion. Contemplate, say, instigating a lynching.
The reason EAs don’t engage in any coercion is that they don’t have any power. But I don’t see anything in their line of reasoning which would stop them from coercing other people if they do get some power. They are not libertarians.