I’m going to stop going point-for-point on this, and this will probably be my final post on the matter. But the gist of my argument is this:
You say that it’s reasonable to “look away”, to consciously try to disconnect your emotions from reality. This is essentially sacrificing emotional epistemic rationality for emotional instrumental rationality. In that sense, I consider it theoretically reasonable: epistemic rationality is ultimately only a sub-goal of instrumental rationality.
But unless you’re a perfect rationalist, it is extremely dangerous to have a policy of favoring instrumental rationality over epistemic rationality. It’s virtually impossible to lie to yourself in a way that is not contagious. Unless you have complete information about the universe and total knowledge of how to apply it, you can never be sure that the lie you told to cover up one unfortunate truth won’t catch you somewhere else—and when it’s a lie you’ve told yourself, a false thing you’ve willed yourself into believing, you can’t even keep the truth at the back of your mind to make sure you maintain correspondence with reality.
Yours is the logic of conversion, the argument that says you should abandon truth for religion if it seems likely to make you happier. Maybe this is the case—but only if you can be sure that reality will never come back and bite you in the ass. Because once you’ve given up that instinct for truth, you can’t get it back. A lie you tell to yourself is self-reinforcing and can’t be isolated. Most likely you will never be able to dig it out.
If you were perfect, you could entirely disjoin the emotional state you wished to feel from the emotional valuation you wished to decide with—making one conscious and keeping the other deep inside your head. But you’re not perfect, you’re human—and humans can’t do that. One who tries to do so will find that their real, underlying, motivating emotions change to match the ones they consciously desire to feel—and in doing so alter their actions.
So your choice is this: either change your emotions to match reality, with all the suffering that entails; or ignore reality for the sake of your emotions, and sacrifice your moral code in doing so.
Sacrificing epistemology is not something you can do once you’ve awakened as a rationalist.
> This is essentially sacrificing emotional epistemic rationality for emotional instrumental rationality.
One thing that you’re overlooking here is that the kind of self-modification Dan is talking about can’t be done unless you actually have strong epistemic rationality with respect to your emotions—strong enough to understand the judgment by which you arrived at the emotions in the first place.
> If you were perfect, you could entirely disjoin the emotional state you wished to feel from the emotional valuation you wished to decide with—making one conscious and keeping the other deep inside your head.
This is a misunderstanding of how emotions work. Our emotions are not synonymous with our values, nor directly derived from them. If they were, we would all be rational, all the time!
Emotions are cached responses to situationally-salient values. Example: I don’t like exercising, but it produces another result I want later. The not-liking-exercise emotion is not actually fulfilling my values: it would be more useful—and more epistemically accurate—for me to experience an emotion in relation to exercise that gives greater weight to my longer-term values. Which of these emotions is epistemically correct?
If our brains actually used our real values in their entirety to arrive at decisions, it’d take too bloody long. So we use cached evaluations based on immediate information… which means our emotions are automatically and systematically biased against our long-term best interests, unless we consciously correct what’s in our caches on an ongoing basis.
So, there is no conflict here between the epistemic and instrumental: removing unnecessary negative emotion is simply correcting systemic biases of the underlying machinery to reflect our true values and desired outcomes, rather than overweighting what is easy to visualize or unconsciously learn.
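To make the caching analogy concrete, here's a minimal sketch of the idea in Python. This is purely my own illustration, not anything from the thread: the `Option` fields, the weights in `full_evaluation`, and the helper names are all made up for the example, not meant as a model of actual cognition.

```python
# Sketch of the "cached evaluation" analogy: a fast, cached appraisal built
# only from what is immediately salient can systematically disagree with a
# full evaluation over long-term values, until the cache is refreshed.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    immediate_payoff: float   # what is salient right now (e.g. effort, discomfort)
    long_term_payoff: float   # what actually matters by our stated values

def full_evaluation(option: Option) -> float:
    """Slow 'true values' appraisal: weighs long-term outcomes heavily."""
    return 0.2 * option.immediate_payoff + 0.8 * option.long_term_payoff

# Quick appraisals keyed by option name.
appraisal_cache = {}

def cached_appraisal(option: Option) -> float:
    """Fast appraisal: on a cache miss, store a judgment built only from
    what is immediately salient, then keep reusing it."""
    if option.name not in appraisal_cache:
        appraisal_cache[option.name] = option.immediate_payoff
    return appraisal_cache[option.name]

def refresh_cache(option: Option) -> None:
    """Consciously correcting the cache: overwrite the stale, salience-driven
    entry with the full evaluation."""
    appraisal_cache[option.name] = full_evaluation(option)

exercise = Option("exercise", immediate_payoff=-2.0, long_term_payoff=5.0)

print(cached_appraisal(exercise))   # -2.0: the stale "I don't like exercising" response
print(full_evaluation(exercise))    # 3.6: what the stated values actually imply
refresh_cache(exercise)
print(cached_appraisal(exercise))   # 3.6: the cache now agrees with the values
```

The only point of the sketch is the structure: the cached entry is cheap, reflects whatever was salient when it was written, and keeps being reused until something deliberately overwrites it with the full evaluation.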
> Our emotions are not synonymous with our values, nor directly derived from them. If they were, we would all be rational, all the time!
You have misunderstood my entire point. I know that emotions don’t naturally reflect values. The argument was over whether achieving your values requires you to change your emotions to reflect them, or if you can be equally motivated by values alone.
From the original post:
> ...you are horrified by the huge amounts of suffering. You have shut up and calculated, and the calculation output that you should feel 3^^^3 times as bad as over a stubbed toe. And a stubbed toe can be pretty bad.
In other words, you have decided that your emotions need to be realigned to reflect (what your value system says about) the state of the world. DanArmak argued that this is false. I argued that it is generally true.
> In other words, you have decided that your emotions need to be realigned to reflect (what your value system says about) the state of the world. DanArmak argued that this is false. I argued that it is generally true.
Dan is in error, insofar as his argument implied that one should have one’s emotions conflict with one’s true values.
You, however, are in error insofar as your arguments praise feeling bad as a path to doing good.
I agree with you that your emotions should reflect your values. OTOH, I agree with Dan that the optimal choice of emotion to reflect one’s values will rarely be feeling bad, unless there is some sort of social goal involved (such as bonding with a group through a shared experience of grief or outrage).