Would you like to discuss a stronger claim, that motivated cognition may be a good epistemology?
Usually people use “logical reasoning + facts”. Maybe we could use “motivated reasoning + facts” instead, i.e. seek a balance between the desirability and the plausibility of a hypothesis.
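To make that “balance” a bit more concrete (just a toy illustration; the weight w and the scoring framing are my own, not a worked-out proposal): rank a hypothesis H by something like score(H) = w · plausibility(H) + (1 − w) · desirability(H), with w between 0 and 1. Pure “logical reasoning + facts” would be the limit w = 1; motivated reasoning as an epistemic norm would be any w strictly below 1.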
I would say that of course motivated reasoning can lead to good epistemology, since my claim is that all epistemology is done at the behest of some motivation, “good” here being relative to some motivation. :-)
For example, it’s quite reasonable to pick a norm like logic or Bayesian rationality and expect reasoning to conform to it in order to produce knowledge of a type that is useful, say to the purpose of predicting what the world will be like in future moments.
Sorry, I meant using motivated cognition as a norm itself, i.e. using motivated cognition for evaluating hypotheses. I mean what people usually mean by motivated cognition: “you believe in this hypothesis because it sounds nice”.
Here’s why I think that motivated cognition (MC) is more epistemically interesting/plausible than people think:
When you’re solving a problem A, it may be useful to imagine the perfect solution. But in order to imagine the perfect solution to problem A, you may need to imagine such solutions to problems B, C, D, etc. … If you never evaluate facts and hypotheses emotionally, you may not even be able to imagine what the “perfect solution” is.
MC may be a challenge: often it’s not obvious what the best possibility is. And the best possibilities may look weird.
Usual arguments against MC (e.g. “the universe doesn’t care about your feelings”, “you should base your opinions on your knowledge about the universe”) may be wrong, because feelings may themselves be based on knowledge about reality.
Modeling people (even rationalists) as using different types of MC may make their arguments and opinions simpler to understand.
MC in the form of ideological reasoning is, in a way, the only epistemology known to us. Bayesianism is cool, but on some important level of reality it’s not really an epistemology (in my opinion): it’s hard or impossible to actually use, and it doesn’t model how thinking and argumentation really work.
If you want, we can discuss these or other points in more detail.
I wrote a post about motivated cognition in epistemology, a version of “the problem of the criterion” and (a bit) about different theories of truth. If you want, I would be happy to discuss some of it with you.