I’m confused about this. Consider these statements:
A. “I believe that my shirt is red.”
B. “I value cheese.”
Are you claiming that:
1. People don’t actually make statements like A.
2. People don’t actually make statements like B.
3. A is expressing the same sort of fact about the world as B.
4. Statements like A and B aren’t completely separate; that is, they can have something to do with one another.
If you strictly mean 1 or 2, I can construct a counterexample. 3 is indeed counterintuitive to me. 4 seems uncontroversial (the putative is/ought problem aside).
If I had to pick, it would be a strong version of 4: in conceptspace, people naturally form groupings that lump is-statements and ought-statements together. But looking back at the post, I definitely have quite a bit to clarify.
When I refer to what humans do, I’m trying to look at the general case. Obviously, if you direct someone’s attention to the issue of is/ought, they can break thoughts down into values and beliefs without much training. However, in the absence of such a deliberate step, I do not think people normally make the distinction.
I’m reminded of the explanation in pjeby’s earlier piece: people instinctively attach XML-like tags of “good” or “bad” to things, blurring the distinction between “X is good” and “Y is a reason to deem X good”. That is why we have to worry about the halo effect, in which you disbelieve anything negative about something you value, even when those negatives would be woefully insufficient to justify not valuing it.
From a computational perspective, this can be viewed as a shortcut around methodically analyzing all the positives and negatives of every course of action and getting stuck thinking instead of acting. But if this is how the mind really works, it isn’t reducible to a CSA without severely stretching the term.
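To make that computational point concrete, here is a minimal toy sketch in Python. All of the names here (the classes, methods, and the “cheese” option) are mine, invented for illustration; this is not a model taken from the post or from pjeby’s piece. One evaluator re-weighs the pros and cons on every call, while the tagging agent caches a single good/bad tag and acts on it, reproducing the halo-effect failure described above:

```python
# Toy illustration only: a tag-caching shortcut vs. methodical re-analysis.

def methodical_evaluation(pros, cons):
    """Re-weigh every consideration on each call: accurate but costly."""
    return sum(pros) - sum(cons)


class TaggingAgent:
    """Caches one 'good'/'bad' tag per option and decides from the tag alone."""

    def __init__(self):
        self.tags = {}      # option -> "good" or "bad"
        self.evidence = {}  # option -> (pros, cons) seen at tagging time

    def form_tag(self, option, pros, cons):
        # The tag is formed once, from the evidence available at the time.
        self.evidence[option] = (list(pros), list(cons))
        self.tags[option] = "good" if sum(pros) >= sum(cons) else "bad"

    def receive_con(self, option, weight):
        # Halo effect: a negative about a 'good'-tagged option is simply
        # disbelieved, however large, instead of being weighed on its merits.
        if self.tags.get(option) == "good":
            return
        self.evidence[option][1].append(weight)

    def decide(self, option):
        # The shortcut: act on the cached tag; no re-analysis, no getting
        # stuck thinking instead of acting.
        return self.tags.get(option) == "good"


agent = TaggingAgent()
agent.form_tag("cheese", pros=[5, 3], cons=[1])
agent.receive_con("cheese", weight=10)  # would flip a methodical analysis
print(agent.decide("cheese"))           # True: the cached tag stands anyway
```

The design choice worth noticing is that `decide` never revisits the evidence: that is what makes it a fast shortcut, and also exactly what makes the halo-effect failure possible.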