(“perhaps they definitely cannot affect it”) This is true if you can do literally nothing about it, but the number of things on which you can have literally no effect is far outweighed by the number of things that holding this attitude will cause you to give up on. Do you think that you can have literally no effect on prison torture?
The effect isn’t literally zero, in the same way that probabilities are never literally zero. But I believe the effect I can have on reducing prison torture is vanishingly small.
But that’s beside the point. As long as the expected effect isn’t worth the effort (not just the time, but also the dangers and losses the effort incurs), I will make precisely zero effort: I will instead work on a different problem. There are many problems, I can only work on a few, and at the very least I needn’t feel unduly bad about the problems I’m not working on, or about the fact that I’m not working on them.
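To make the triage concrete, here is a minimal sketch of the expected-value reasoning being described. Every number and problem name below is a hypothetical placeholder, not an actual estimate:

```python
# Hypothetical expected-value triage over problems one could work on.
# All figures are illustrative assumptions, not real estimates.

problems = {
    # name: (P(my effort changes the outcome), value of that change, cost of the effort)
    "prison torture reform": (1e-9, 1e9, 100.0),
    "local open-source bug": (0.9, 50.0, 10.0),
    "write a useful essay":  (0.3, 500.0, 40.0),
}

def net_expected_value(p_success, value, cost):
    """Expected benefit of working on a problem, minus what the effort costs."""
    return p_success * value - cost

# Work only on problems whose expected effect is worth the effort;
# the rest get zero effort -- and, per the argument above, zero guilt.
worthwhile = {name: net_expected_value(*params)
              for name, params in problems.items()
              if net_expected_value(*params) > 0}
print(worthwhile)
```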
Even death itself is not something you can give up feeling bad about, and it wasn’t a few centuries ago either: the societal consequences of people back then trying to stop feeling bad about death are damaging our efforts to stop it now.
Take the POV of someone living a few centuries ago, knowing that death might be solvable but only in the distant future. On the one hand, you have some measure of influence over future societies’ fight against death: a very tiny measure, because you must multiply it by your uncertainty about what the future society will look like and how it might be influenced by its history. On the other hand, you have the certain knowledge that you and a billion of your contemporaries will feel much better if you, e.g., don’t fear death as much because you believe in an afterlife, which also means you’ll believe that fighting death is a sin. It’s obvious to me what the correct choice is.
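The same expected-value arithmetic applies here. Again, every magnitude below is a made-up stand-in for the qualitative claim, not a serious estimate:

```python
# Hypothetical comparison from the POV of someone centuries before death is solvable.
# All magnitudes are illustrative assumptions.

# Side A: tiny influence on the far-future fight against death.
p_influence_survives = 1e-6  # chance your stance measurably shapes future society
p_model_correct = 1e-3       # discount for uncertainty about what that society looks like
value_if_it_helps = 1e12     # value if the influence actually helps end death
side_a = p_influence_survives * p_model_correct * value_if_it_helps

# Side B: certain comfort for you and your contemporaries now.
people_comforted = 1e9
comfort_per_person = 1.0     # relief from fearing death less (e.g. via afterlife belief)
side_b = people_comforted * comfort_per_person

print(side_a, side_b)  # 1e3 vs 1e9 under these assumptions
```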
c. (“This suffering is needless”) This suffering is clearly not needless if it produces actions which can help resolve the problem.
As I said in the post: When you’re faced with something terrible and you’re not doing anything about it anyway, just look away. Defeat the implicit LW conditioning that tells you looking away from the suffering of others is wrong. It’s wrong only if it affects your actions, not if it merely affects your emotions.
I believe the vast majority of suffering of this kind never affects actions, and so is unnecessary. And a rationalist should be able to correctly identify the cases where it does and where it doesn’t.
Life ain’t easy. I don’t see how trying to look away helps.
When life ain’t easy for someone else, looking away helps you.
It’s true that we keep acting in the absence of immediate strong emotion, but do you really believe that we’d act just as strongly to resolve a problem whether we still felt strongly about it or not?
I believe at least 90% of strong empathetic emotions and fears can be eliminated without directly, causally reducing any actions towards resolving problems. There may be long-term effects of this mental posture of which I’m unaware, but I have no reason to believe such effects would reduce action. At the very least, getting rid of negative emotions makes for a healthier and freer mental life, and feeling better in yourself is known to encourage positive action.
I’m going to stop going point-for-point on this, and this will probably be my final post on the matter. But the gist of my argument is this:
You say that it’s reasonable to “look away”, to consciously try to disconnect your emotions from reality. This is essentially sacrificing emotional epistemic rationality for emotional instrumental rationality. In that sense, I consider it theoretically reasonable: epistemic rationality is ultimately only a sub-goal of instrumental rationality.
But unless you’re a perfect rationalist, it is extremely dangerous to have a policy of favoring instrumental rationality over epistemic rationality. It’s virtually impossible to lie to yourself in a way that is not contagious. Unless you have complete information about the universe and total knowledge of how to apply it, you can never be sure that the lie you told to cover up one unfortunate truth won’t catch you somewhere else—and when it’s a lie you’ve told yourself, a false thing you’ve willed yourself into believing, you can’t even keep the truth at the back of your mind to make sure you maintain correspondence with reality.
Yours is the logic of conversion, the argument that says you should abandon truth for religion if it seems likely to make you happier. Maybe this is the case—but only if you can be sure that reality will never come back and bite you in the ass. Because once you’ve given up that instinct for truth, you can’t get it back. A lie you tell to yourself is self-reinforcing and can’t be isolated. Most likely you will never be able to dig it out.
If you were perfect, you could entirely disjoin the emotional state you wished to feel from the emotional valuation you wished to decide with—making one conscious and keeping the other deep inside your head. But you’re not perfect, you’re human—and humans can’t do that. One who tries to do so will find that their real, underlying, motivating emotions change to match the ones they consciously desire to feel—and in doing so alter their actions.
So your choice is this: either change your emotions to match reality, with all the suffering that entails; or ignore reality for the sake of your emotions, and sacrifice your moral code in doing so.
Sacrificing epistemology is not something you can do once you’ve awakened as a rationalist.
This is essentially sacrificing emotional epistemic rationality for emotional instrumental rationality.
One thing that you’re overlooking here is that the kind of self-modification Dan is talking about can’t be done unless you actually have strong epistemic rationality with respect to your emotions—strong enough to understand the judgment by which you arrived at the emotions in the first place.
If you were perfect, you could entirely disjoin the emotional state you wished to feel from the emotional valuation you wished to decide with—making one conscious and keeping the other deep inside your head.
This is a misunderstanding of how emotions work. Our emotions are not synonymous with our values, nor directly derived from them. If they were, we would all be rational, all the time!
Emotions are cached responses to situationally-salient values. Example: I don’t like exercising, but it produces another result I want later. The not-liking-exercise emotion is not actually serving my values: it would be more useful, and more epistemically accurate, for me to experience an emotion in relation to exercise that gives greater weight to my longer-term values. Which of those two emotions is epistemically correct?
If our brains actually used our real values in their entirety to arrive at decisions, it’d take too bloody long. So we use cached evaluations based on immediate information… which means our emotions are automatically and systematically biased against our long-term best interests, unless we consciously correct what’s in our caches on an ongoing basis.
So, there is no conflict here between the epistemic and the instrumental: removing unnecessary negative emotion is simply correcting the systematic biases of the underlying machinery to reflect our true values and desired outcomes, rather than overweighting what is easy to visualize or unconsciously learn.
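As a loose computational analogy (my own illustrative sketch, not a model anyone in the thread proposed, with made-up payoffs): an emotion behaves like a cache that only ever stored the immediate, situationally-salient term of the value function, and stays biased until it is deliberately refreshed:

```python
# Illustrative analogy: an emotional response as a stale cached evaluation.
# All payoffs and actions are hypothetical.

PAYOFFS = {
    # action: (immediate payoff, long-term payoff)
    "exercise": (-2.0, +10.0),
    "skip it":  (+1.0, -5.0),
}

def full_evaluation(action):
    """Slow 'true values' computation: weighs immediate AND long-term payoffs."""
    immediate, long_term = PAYOFFS[action]
    return immediate + long_term

# The cache only ever stored the immediate, salient part of the evaluation,
# so it is systematically biased against long-term interests.
emotion_cache = {action: PAYOFFS[action][0] for action in PAYOFFS}

def felt_preference(cache):
    return max(cache, key=cache.get)

print(felt_preference(emotion_cache))  # 'skip it' -- the stale, biased emotion

# Conscious correction: refresh the cache against the full value function.
emotion_cache = {action: full_evaluation(action) for action in PAYOFFS}
print(felt_preference(emotion_cache))  # 'exercise' -- emotion realigned with true values
```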
Our emotions are not synonymous with our values, nor directly derived from them. If they were, we would all be rational, all the time!
You have misunderstood my entire point. I know that emotions don’t naturally reflect values. The argument was over whether achieving your values requires you to change your emotions to reflect them, or whether you can be equally motivated by values alone.
From the original post:
...you are horrified by the huge amounts of suffering. You have shut up and calculated, and the calculation output that you should feel 3^^^3 times as bad as over a stubbed toe. And a stubbed toe can be pretty bad.
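(For scale: 3^^^3 is Knuth’s up-arrow notation. A minimal sketch of the recursion, with the hopeless final computation left unevaluated, shows why the number defies comprehension:)

```python
# Knuth's up-arrow notation, as used in "3^^^3" above.
# up(a, 1, b) = a**b; each additional arrow iterates the previous operation.

def up(a, arrows, b):
    if arrows == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up(a, arrows - 1, result)
    return result

print(up(3, 2, 3))  # 3^^3 = 3**(3**3) = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s about 7.6 trillion levels tall --
# far too large to ever compute; the point of the example is its sheer scale.
```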
In other words, you have decided that your emotions need to be realigned to reflect (what your value system says about) the state of the world. DanArmak argued that this is false. I argued that it is generally true.
In other words, you have decided that your emotions need to be realigned to reflect (what your value system says about) the state of the world. DanArmak argued that this is false. I argued that it is generally true.
Dan is in error, insofar as his argument implied that one should have one’s emotions conflict with one’s true values.
You, however, are in error insofar as your arguments praise feeling bad as a path to doing good.
I agree with you that your emotions should reflect your values. OTOH, I agree with Dan that the optimal choice of emotion to reflect one’s values will rarely be feeling bad, unless there is some sort of social goal involved (such as bonding with a group through a shared experience of grief or outrage).