We Need To Explain Why Humans Differentiate Goals and Beliefs, Not Just Why We Conflate Them
You mention that good/bad seem like natural categories. I agree that people often seem to mix up “should” and “probably is”, “good” and “normal”, “bad” and “weird”, etc. These observations in themselves speak in favor of the minimize-prediction-error theory of values.
However, we also differentiate these concepts at other times. Why is that? Is it some kind of mistake? Or is the conflation of the two the mistake?
I think the mix-up between the two is partly explained by the effect I mentioned earlier: common practice is optimized to be good, so there will be a tendency for commonality and goodness to correlate. So, it’s sensible to cluster them together mentally, which can result in them getting confused. There’s likely another aspect as well, which has something to do with social enforcement (ie, people are strategically conflating the two some of the time?) -- but I’m not sure exactly how that works.
This seems like an important question: if all these phenomena really are ultimately the same thing and powered by the same mechanisms, why do we make distinctions between them and find those distinctions useful?
I don’t have an answer I’m satisfied with, but I’ll try to say a few words about what I’m thinking and see if that moves us along.
My first approximation would be that we experience these things by different means, and so we give them different names because they present in different ways when we observe them. Goals (I assume by this you mean the cluster of things we might call desires, aversions, and generally intentions toward action) probably tend to be observed by noticing outgoing signals that usually generate observable actions (movement, speech, etc.), whereas beliefs (the cluster of things that includes thoughts and maybe emotions) are internal and don’t send out signals to anything beyond mental action.
I don’t know enough to be very confident in that, though, and I agree with you that there could be numerous reasons why it makes sense to think of them as separate even if they are fundamentally not very different.
On my understanding of how things work, goals and beliefs combine to make action, so neither one is really mentally closer to action than the other. Both a goal and a belief can be quite far removed from action (eg, a nearly impossible goal which you don’t act on, or a belief about far-away things which don’t influence your day-to-day). Both can be very close (a jump scare seems most closely connected to a belief, whereas deciding to move your hand and then doing so is more goal-like—granted both those examples have complications).
If, in conversation, the distinction comes up explicitly, it is usually because of stuff like this:
Alice makes an unclear statement; it sounds like she could be claiming A or wanting A.
Bob asks for clarification, because Bob’s reaction to believing A is true would be very different from his reaction to believing A is good (or, in more relative terms, knowing Alice endorses one or the other of those). In the first case, Bob might plan under the assumption A; in the second, Bob might make plans designed to make A true.
Alice is engaging in wishful thinking, claiming that something is true when really the opposite is just too terrible to consider.
Bob wants to be able to rely on Alice’s assertions, so Bob is concerned about the possibility of wishful thinking.
Or, Bob is concerned for Alice; Bob doesn’t want Alice to overlook risks because she’s ignoring negative possibilities, or to fail to set up back-up plans for the bad scenarios.
My point is that this doesn’t seem to me like a case of people intuitively breaking up something which is scientifically really one phenomenon. Predicting A and wanting A seem to have quite different consequences. If you predict A, you tend to restrict attention to the cases where it is true when planning; you may plan actions which rely on it. If you want A, you don’t do that; you stay very aware of all the cases where not-A, and you take actions designed to ensure A.
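To make that functional difference concrete, here is a minimal toy sketch (my own illustration; the states, actions, and probabilities are all made up): a planner that predicts A conditions on A and drops the not-A cases, while a planner that wants A keeps every outcome in view and picks the action that raises the probability of A.

```python
def plan_assuming(world_states):
    # Predicting A: planning restricts attention to states where A holds.
    return [s for s in world_states if s["A"]]

def plan_to_ensure(actions, prob_A_given):
    # Wanting A: stay aware of all the not-A cases, and pick the
    # action that makes A most likely.
    return max(actions, key=lambda a: prob_A_given[a])

# Toy example (all values invented for illustration):
states = [{"A": True, "name": "project succeeds"},
          {"A": False, "name": "project fails"}]
actions = ["proceed as usual", "add a safety margin"]
prob_A_given = {"proceed as usual": 0.5, "add a safety margin": 0.8}

considered = plan_assuming(states)              # only the A-states remain
chosen = plan_to_ensure(actions, prob_A_given)  # the action raising P(A)
```

The same proposition A drives two very different computations, which is the sense in which the distinction earns its keep even if both are built from the same underlying machinery.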