I disagree again: I don’t think that any reasonable definition of emotion makes the following statement true:
emotions allow you to (prior to conscious evaluation) narrow down the field of “all possible hypotheses” to “likely to be true” hypotheses.
I think that emotions often do the opposite. They narrow down the field of “all possible hypotheses” to “likely to make me feel good about myself if I believe it” hypotheses and “likely to support my preexisting biases about the world” hypotheses, which is precisely the problem that this site is tackling… if emotions subconsciously selected “likely to be true” hypotheses, we would not be in the somewhat problematic situation we are in.
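To make that contrast concrete, here is a minimal Python sketch of the kind of filter being described; all names, scores, and example hypotheses are invented for illustration, and this is a toy, not a claim about actual cognition. It ranks hypotheses by how flattering and prior-confirming they are, and never consults how likely they are to be true.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    text: str
    flattering: float   # how good it feels to believe this (0..1, invented)
    fits_priors: float  # how well it confirms preexisting beliefs (0..1, invented)
    likely_true: float  # what an ideal reasoner would assign (0..1, invented)

def emotional_filter(hypotheses, k=3):
    """Keep the k hypotheses that feel best; likely_true is never consulted."""
    return sorted(hypotheses,
                  key=lambda h: h.flattering + h.fits_priors,
                  reverse=True)[:k]

candidates = [
    Hypothesis("I failed because the test was unfair", 0.9, 0.8, 0.2),
    Hypothesis("I failed because I didn't prepare", 0.1, 0.3, 0.9),
]

# The flattering, prior-confirming hypothesis survives the filter;
# the one most likely to be true does not.
print(emotional_filter(candidates, k=1)[0].text)
```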
Those are subsets of what you believe to be likely true.
Right, we agree. But I think that we have overused the word “emotion”… That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love. We need different names for them. I call the latter emotion, and the former a “hypothesis-generating part of your cognitive algorithm”. I think and hope that one can separate the two.
That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love
No… the former merely sorts those hypotheses based on information from the latter. Or, more precisely, the raw data from which those hypotheses are generated has been stored in such a way that retrieval is prioritized by emotion, and any such emotions are played back as an integral part of retrieval.
One’s physio-emotional state at the time of retrieval also affects retrieval priorities… if you’re angry, for example, memories tagged “angry” are prioritized.
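As a purely illustrative sketch of the retrieval scheme described above (the Memory structure, tags, intensities, and scoring are all invented for the example, not a claim about actual brainware): memories carry an emotion tag, retrieval is prioritized by congruence with the current emotional state, and the stored emotion is returned, i.e. “played back”, alongside the content.

```python
from collections import namedtuple

Memory = namedtuple("Memory", ["content", "emotion", "intensity"])

store = [
    Memory("the argument last week", "angry", 0.8),
    Memory("a quiet walk home", "calm", 0.4),
    Memory("being cut off in traffic", "angry", 0.6),
]

def retrieve(memories, current_emotion, k=2):
    """Rank memories by congruence with the current emotional state,
    and return the stored emotion alongside each retrieved memory,
    so the emotion is 'played back' as part of retrieval."""
    def priority(m):
        congruence = 1.0 if m.emotion == current_emotion else 0.0
        return congruence + m.intensity
    ranked = sorted(memories, key=priority, reverse=True)[:k]
    return [(m.content, m.emotion) for m in ranked]

# While angry, "angry"-tagged memories come back first, emotions included.
print(retrieve(store, "angry"))
```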
Those are subsets of what you believe to be likely true.
Great! Hurrah for emotions: they make you believe things that you believe are likely to be true…
epistemic rationality is about believing things that are actually true, rather than believing things that you believe to be true.
And that’s why it’s a good thing to know what you’re up against, with respect to the hardware upon which you’re trying to do that.