Isn’t this the ONLY kind of emotion-hacking out there? What emotions are expressed irrespective of external stimuli? Seems like a small or insignificant subset.
Let me make some more precise definitions: by “emotional responses to my thoughts” I mean “what I feel when I think a given thought,” e.g. I feel a mild negative emotion when I think about calling people. By “emotional responses to my behavior” I mean “what I feel when I perform a given action,” e.g. I feel a mild negative emotion when I call people. By “emotional responses to external stimuli” I mean “what I feel when a given thing happens in the world around me,” e.g. I feel a mild negative emotion when people call me. The distinction I’m trying to make between my behavior and external stimuli is analogous to the distinction between operant and classical conditioning.
I thought you were questioning the value of considering/responding to others’ thoughts, because you are arguing that even if you could, you would need to rely on their words and expressions, which may not be correlated with their “true” state of mind.
No, I’m just making the point that for the purposes of classifying different kinds of emotion-hacking I don’t find it useful to have a category for other people’s thoughts separate from other people’s behaviors (in contrast to how I find it useful to have a category for my thoughts separate from my behaviors), and the reason is that I don’t have direct access to other people’s thoughts.
What problem?
Thanks for the clarification, now I understand.
Going back to the original comment I commented on:
emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).
Particularly with your third type of emotion hacking (“hacking your emotional responses to external stimuli”), it seems emotion hacking is vital for epistemic rationality. I guess that relates to my original point: hacking emotions is at least as important for epistemic rationality as it is for instrumental rationality.
I raised the issue originally because I worry that rationality, to the extent it must value subjective considerations, tends to minimize the importance of those considerations in order to yield a clearer inquiry.
Can you clarify what you mean by this?
Sure. Note that I don’t offer this as conclusive or correct, just something I’m thinking about. Also, let’s assume rational choice theory is universally applicable for decision-making.
Rational choice theory gives you an equation to use; all you have to do is fill that equation with the proper inputs, value them correctly, and you get an answer. Obviously this is more difficult in practice, particularly where inputs (as is to be expected) are not easily convertible to probabilities or numbers; I’m worried this is actually more problematic than we think. Once we have an objective equation as a tool, we may be biased to assume objectivity and truth regarding our answers, even though that belief often rests on the strength of the starting equation and not on our ability to accurately value and include the appropriate subjective factors. To the extent that answering a question becomes difficult, we manufacture “certainty” by ignoring subjectivity or assuming it is less relevant than it actually is.
Simply put, the belief that we have a good and objective starting point biases us to believe we also can, will, and actually do derive an objectively correct answer, which affects the accuracy with which we fill in the equation.
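To make the worry above concrete, here is a minimal Python sketch (the action, probabilities, and utilities are invented for illustration, not taken from this discussion): the expected-utility formula itself is fixed, but the answer turns entirely on how the subjective inputs are valued.

```python
# Toy expected-utility calculation: the formula is fixed and "objective",
# but every number fed into it is a subjective estimate.

def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Invented inputs for a decision like "should I make this phone call?"
call = [
    (0.7, 5.0),    # the call goes well
    (0.3, -20.0),  # the call goes badly (dread, awkwardness)
]
dont_call = [
    (1.0, -1.0),   # mild ongoing guilt for avoiding it
]

print(expected_utility(call))       # 0.7*5.0 + 0.3*(-20.0) = -2.5
print(expected_utility(dont_call))  # -1.0, so "don't call" wins on these numbers
```

Change the -20.0 estimate to -2.0 and the recommendation reverses, which is the sense in which the equation’s objectivity does not transfer to the answer.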
I agree that this is problematic but don’t see what it has to do with what I’ve been saying.
You suggested that emotion hacking is more of an issue for instrumental rationality and not so much for epistemic rationality. To the extent that is wrong, you’re excluding emotion hacking (a subjective factor) from your application of epistemic rationality.
I’m happy to agree that emotion hacking is important to epistemic rationality.
OK, I wasn’t trying to play “gotcha,” just answering your question. Good chat; thanks for engaging with me.