“You want accurate beliefs and useful emotions.”
From a participant at the January CFAR workshop. I don’t remember who. This struck me as an excellent description of what rationalists seek.
People often seem to get these mixed up, resulting in “You want useful beliefs and accurate emotions.”
Not sure what an “accurate emotion” would mean; it feels like some sort of domain error (e.g. a blue sound).
An accurate emotion = “I’m angry because I should be angry because she is being really, really mean to me.”
A useful emotion = “Showing empathy towards someone being mean to me will minimize the cost to me of others’ hostility.”
Where’s that ‘should’ coming from? (Or are you just explaining the concept rather than endorsing it?)
I mean it in the way most (non-LW) people would interpret it, so explaining, not endorsing.
Contrasting “accurate beliefs and useful emotions” with “useful beliefs and accurate emotions” would probably make a good exercise for a novice rationalist.
Why not both useful beliefs and useful emotions?
Why privilege beliefs?
This is addressed by several Sequence posts, e.g. Why truth? And..., Dark Side Epistemology, and Focus Your Uncertainty.
Beliefs shoulder the burden of having to reflect the territory, while emotions don’t. (Although many people seem to hold beliefs that secretly encode heuristics they could, if they thought about it, just execute directly: believing that people are nice, for example, may secretly encode a heuristic to be nice to people, which you could follow anyway. This is one kind of not-really-anticipation-controlling belief that doesn’t seem to be addressed by the Sequences.)
“Beliefs shoulder the burden of having to reflect the territory, while emotions don’t.”
This is how I have come to think of beliefs. It’s like refactoring code: you should do it when you spot regularities you can eke efficiency out of, but only if it does not make the code unwieldy or unnatural, and only if it does not make the code fragile. Beliefs should work the same way. When my rules of thumb seem to respect some regularity in reality, I’m perfectly happy to call that “truth”, so long as it does not break my tools.
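A minimal sketch of that refactoring analogy in Python (the cents-to-dollars example and all names here are mine, purely illustrative): extract a regularity into one place only while the regularity actually holds.

```python
# Hypothetical example: three call sites share the same cents-to-dollars
# regularity, so we factor it out into one helper -- the "belief" they
# now all rely on.

def format_dollars(cents: int) -> str:
    """The extracted regularity: cents rendered as a dollar amount."""
    return f"${cents / 100:.2f}"

def receipt_line(item: str, cents: int) -> str:
    return f"{item}: {format_dollars(cents)}"

def refund_notice(cents: int) -> str:
    return f"Refunded {format_dollars(cents)}"

def price_tag(cents: int) -> str:
    return format_dollars(cents)

# The refactor pays off only while the regularity holds. If refunds later
# need a minus sign and price tags need rounding, forcing all three call
# sites through one helper makes the code fragile -- the analogue of a
# "truth" that breaks your tools.
print(receipt_line("coffee", 350))  # coffee: $3.50
```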
“Beliefs shoulder the burden of having to reflect the territory, while emotions don’t.” Superb point that. And thanks for the links.
If useful doesn’t equal accurate, then you have biased your map.
The most useful beliefs to have are almost always accurate ones, so in almost all situations useful = accurate. But most people have an innate desire to bias their map in a way that harms them over the long run. Restated, most people have harmful emotional urges that do their damage by causing inaccurate maps that “feel” useful but really are not. Drilling into yourself the value of an accurate map, in part by changing your emotions so that accuracy becomes a short-term emotional urge, will ultimately give you more useful beliefs than the short-term emotional urge to have useful beliefs would.
A Bayesian super-intelligence could go for both useful beliefs and emotions. But given the limitations of the human brain I’m better off programming the emotional part of mine to look for accuracy in beliefs rather than usefulness.
Useful may not be accurate, depending on one’s motives. A ‘useful’ belief may be one that allows you to do what you really want to, unburdened by ethical/logistical/moral considerations. E.g., the belief that non-Europeans aren’t really human permits one to colonise their land without qualms.
I suppose that’s why, as a rationalist, one would prefer accurate beliefs: they don’t give you the liberty of lying to yourself like that. And to a rationalist, accurate beliefs will be far more useful than inaccurate ones.
Good point about beliefs possibly only “feeling” useful. But that applies to accuracy as well: privileging accuracy can also lead you to overstate its usefulness. In fact, I find it’s often better not to have beliefs at all. Rather than trying to contort my beliefs to be useful, a bunch of non-map-based heuristics gets the job done handily. Remember, the map-territory distinction is itself but a useful meta-heuristic.
A useful belief is an accurate one. It is, however, easy to believe a belief is useful without testing its veracity. Therefore it is optimal to test for accuracy in beliefs, as opposed to querying one’s belief in its usefulness.
Conversely, why not both accurate beliefs and emotions?
Let useful come into play when choosing your actions. This can include framing your emotions—but if you just go around changing your emotions to whatever’s useful, you’re not being yourself.
Taboo “being yourself”.
“being yourself”: A metaphor for a feeling so far removed from modern language’s ability to describe that it’s a local impossibility for all but a tiny portion of the people in the world to taboo it. Its purpose is to elicit the associated feeling in the listener, not to serve as a descriptive reference. It is a feeling so deeply ingrained in 50% of people that those people don’t realize the other 50% don’t know what it is, and so have never thought to even begin to try to explain it, much less taboo it.
Tabooing the word as if it describes an action is an inadequate representation of the word’s true meaning. The same is true of tabooing it as if it describes an emotion, a thought, a belief, or an identity.
“Being yourself” is a conglomeration of two concepts. The first, “being”, requires the assumption that there is such a thing as a “state of being”: an all-encompassing description of something’s non-physical properties, a snapshot of a single moment, and one unlikely to change over time. The second, “yourself”, requires the assumption that there is a spark of consciousness at the source of any mental processes, or, relatedly, of any living creature. This concept is reminiscent of the concept of a “soul”.
I personally find the concept of “being oneself” to rest on the fallacious assumption that the spark of consciousness is separate from the current state of being, and that said state and spark do not flux and change continuously.
However, the context of the phrase “being yourself”, in this instance, requires not that this phrase be tabooed, but that “changing your emotions” be tabooed, along with “useful”. The question regarding “changing your emotions” is whether the author meant that truly changing one’s emotions would be “not being oneself”, or something else, such as that putting on a facade of an emotion one is not experiencing is “not being oneself”.
“Useful” is a word that has different definitions for many people, and its meaning often changes with context. The comment in question likely rests on a misunderstanding of what is meant by “useful”, which implies that many people may have misunderstood the word, perhaps even including the original poster of the quote.
So, the useful thing to do would not be to taboo “being yourself”, but to instead taboo “useful”.
In my case, I am using “useful” to mean an action which produces a generalized and averaged value for all involved and all observers. In this case, I consider the “value” in question to be an increase in communication ability for all posters, and a general increase in all readers’ ability to progress their own mental abilities. I could taboo further, but I don’t see any proportionally significant value in doing so.
Attempting to override your utility function. Effectively, a stab at wetware wireheading.
It’s perhaps worth noting that EY seems to have taken instead the “accurate beliefs and accurate emotions” tack in e.g. The Twelve Virtues of Rationality. Or at least that seems to be what’s implied.
I mean, I suspect “accurate beliefs and useful emotions” really is the way to go; but if this really is a sort of consensus here, it’s something we need to be much more explicit about, IMO. At the moment there seems to be little about it in the sequences / core articles, or at least little that’s explicit (I’m going from memory in making that statement).
Agreed. The idea that I should be paying attention to and then hacking my emotions is not something I learned from the Sequences but from the CFAR workshop. In general, though, the Sequences are more concerned with epistemic than instrumental rationality, and emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).
Emotion-hacking seems far more important for epistemic rationality: your understanding of the world is the setting in which you use instrumental rationality, and your “lens” (which presumably encompasses your emotions) is the key hurdle (assuming you are otherwise rational) preventing you from achieving the objectivity necessary to form true beliefs about the world.
I suppose I should distinguish between two kinds of emotion-hacking: hacking your emotional responses to thoughts, and hacking your emotional responses to behaviors. The former is an epistemic technique and the latter is an instrumental technique. Both are quite useful.
Whose thoughts and whose behaviors? Not disagreeing, just asking.
My thoughts and my behaviors. I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli. But it’s not as if I can respond to other people’s thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.
All emotions are responses to external stimuli, unless your emotions relate only to what is going on in your head, without reference to the outside (i.e. outside your body) world.
I agree you can’t respond to others’ thoughts unless they express them such that they become “behaviors.” Interestingly, the “problem” you have with the sounds or images (or words?) which purport to be correlated to others’ thoughts is the exact same issue everyone else has with you (or me).
If we’re confident in our own ability to express our thoughts (i.e. the correlation problem is not an issue for us), then how much can we dismiss others’ expressions because of that very same issue?
I don’t understand what point you’re trying to make.
“I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli.”
Isn’t this the ONLY kind of emotion-hacking out there? What emotions are expressed irrespective of external stimuli? That seems like a small or insignificant subset.
“But it’s not as if I can respond to other people’s thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.”
The second two paragraphs above are responding to this. Sorry to throw it back at you, but perhaps I’m misunderstanding the point you were trying to make here? I thought you were questioning the value of considering/responding to others’ thoughts, arguing that even if you could, you would need to rely on their words and expressions, which may not be correlated with their “true” state of mind.
Let me make some more precise definitions: by “emotional responses to my thoughts” I mean “what I feel when I think a given thought,” e.g. I feel a mild negative emotion when I think about calling people. By “emotional responses to my behavior” I mean “what I feel when I perform a given action,” e.g. I feel a mild negative emotion when I call people. By “emotional responses to external stimuli” I mean “what I feel when a given thing happens in the world around me,” e.g. I feel a mild negative emotion when people call me. The distinction I’m trying to make between my behavior and external stimuli is analogous to the distinction between operant and classical conditioning.
“I thought you were questioning the value of considering/responding to others’ thoughts…”
No, I’m just making the point that for the purposes of classifying different kinds of emotion-hacking I don’t find it useful to have a category for other people’s thoughts separate from other people’s behaviors (in contrast to how I find it useful to have a category for my thoughts separate from my behaviors), and the reason is that I don’t have direct access to other people’s thoughts.
What problem?
Thanks for the clarification; now I understand.
Going back to the original comment I commented on:
“emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).”
Particularly with your third type of emotion-hacking (“hacking your emotional responses to external stimuli”), it seems emotion-hacking is vital for epistemic rationality. I guess that relates to my original point: hacking emotions is at least as important for epistemic rationality as it is for instrumental rationality.
I raised the issue originally because I worry that rationality, to the extent it must value subjective considerations, tends to minimize the importance of those considerations to yield a clearer inquiry.
Can you clarify what you mean by this?
Sure. Note that I don’t offer this as conclusive or correct, just as something I’m thinking about. Also, let’s assume rational choice theory is universally applicable for decision making.
Rational choice theory gives you an equation to use: fill it with the proper inputs, value them correctly, and you get an answer. Obviously this is more difficult in practice, particularly where inputs (as is to be expected) are not easily convertible to probabilities/numbers; I’m worried this is actually more problematic than we think. Once we have an objective equation as a tool, we may be biased to assume objectivity and truth regarding our answers, even though that belief often rests on the strength of the starting equation and not on our ability to accurately value and include the appropriate subjective factors. To the extent answering a question becomes difficult, we manufacture “certainty” by ignoring subjectivity or assuming it is less relevant than it is.
Simply put, the belief that we have a good and objective starting point biases us to believe we also can/will/actually derive an objectively correct answer, affecting the accuracy with which we fill in the equation.
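For concreteness, the “equation” here is presumably the standard expected-utility rule of rational choice theory (my gloss, not the commenter’s):

$$\mathrm{EU}(a) = \sum_{o} P(o \mid a)\, U(o), \qquad a^{*} = \arg\max_{a} \mathrm{EU}(a)$$

The form looks objective, but every $P(o \mid a)$ and $U(o)$ fed into it can be a subjective guess; the worry above is that the cleanliness of the left-hand side launders the messiness of the right.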
I agree that this is problematic but don’t see what it has to do with what I’ve been saying.
You suggested that emotion-hacking is more of an issue for instrumental rationality and not so much for epistemic rationality. To the extent that is wrong, you’re omitting emotion-hacking (a subjective factor) from your application of epistemic rationality.
I’m happy to agree that emotion hacking is important to epistemic rationality.
OK. I wasn’t trying to play “gotcha,” just answering your question. Good chat; thanks for engaging with me.
Indeed, accurate emotions appear a better description. Consider: killing someone might free up many opportunities, and would have only the consequence of bettering many lives; the useful emotion would be happiness at the opportunity to forever end that person’s continued generation and spread of negative utility. Regardless of whether the accurate emotion might yield the same result, I’d trust the decisions of those who emote accurately, for though I know not whither hacking for emotional usefulness leads, a change of values toward the disutility of others I strongly suspect.