I dispute your premise: what makes you so sure people do decompose their thoughts into beliefs and values, and find these to be natural, distinct categories? Consider the politics as mind-killer phenomenon. That can be expressed as, “People put your words into a broader context of whether they threaten their interests, and argue for or against your statements on that basis.”
For example, consider the difficulty you will have communicating your position if you believe both (a) global warming is unlikely to cause any significant problems in the business-as-usual scenario, and (b) high taxes on CO2 emissions should be levied (e.g., you believe they're a good idea as an insurance policy and can be implemented in a way that avoids most of the economic damage).
(Yes, I had to use a present example to make the reactions easier to imagine.)
The “ought” is so tightly coupled to the “is” that in any case where the “ought” actually matters, the “is” comes along for the ride.
Note: this is related to the problem I had with the exposition of could/would/should agents: if you say humans are CSAs, what’s an example of an intelligent agent that isn’t?
I’m confused about this. Consider these statements:
A. “I believe that my shirt is red.”
B. “I value cheese.”
Are you claiming that:
1. People don’t actually make statements like A
2. People don’t actually make statements like B
3. A is expressing the same sort of fact about the world as B
4. Statements like A and B aren’t completely separate; that is, they can have something to do with one another.
If you strictly mean 1 or 2, I can construct a counterexample. 3 is indeed counterintuitive to me. 4 seems uncontroversial (the putative is/ought problem aside).
If I had to say, it would be a strong version of 4: in conceptspace, people naturally make groupings that put is- and ought-statements together. But looking back at the post, I definitely have quite a bit to clarify.
When I refer to what humans do, I’m trying to look at the general case. Obviously, if you direct someone’s attention to the issue of is/ought, then they can break down thoughts into values and beliefs without much training. However, in the absence of such a deliberate step, I do not think people normally make a distinction.
I’m reminded of the explanation in pjeby’s earlier piece: people instinctively put xml-tags of “good” or “bad” onto different things, blurring the distinction between “X is good” and “Y is a reason to deem X good”. That is why we have to worry about the halo effect, where you disbelieve everything negative about something you value, even if such negatives are woefully insufficient to justify not valuing it.
From the computational perspective, this can be viewed as a shortcut that avoids having to methodically analyze all the positives and negatives of every course of action and getting stuck thinking instead of acting. But if this is how the mind really works, it’s not reducible to a CSA without severely stretching the meaning.
Seconded. Sometimes I don’t even feel I have fully separate beliefs and values. For instance, I’m often willing to change my beliefs to achieve my values (e.g., by believing something I have no evidence for, to become friends with other people who believe it—and yes, ungrounded beliefs can be adopted voluntarily to an extent.)
I cannot do this, and I don’t understand anyone who can. If you consciously say “OK, it would be really nice to believe X, now I am going to try really hard to start believing it despite the evidence against it”, then you already disbelieve X.
I already disbelieve X, true, but I can change that. Of course it doesn’t happen in a moment :-)
Yes, you can’t create that feeling of rational knowledge about X from nothing. But if you can retreat from rationality—to where most people live their lives—and if you repeat X often enough, and you have no strongly emotional reason not to believe X, and your family and peers and role models all profess X, and X behaves like a good in-group distinguishing mark—then I think you have a good chance of coming to believe X. The kind of belief associated with faith and sports team fandom.
It’s a little like the recent thread where someone, I forget who, described an (edit: hypothetical) religious guy who when drunk confessed that he didn’t really believe in god and was only acting religious for the social benefits. Then people argued that no “really” religious person would honestly say that, and other people argued that even if he said that what does it mean if he honestly denies it whenever he’s sober?
In the end I subscribe to the “PR consciousness” theory that says consciousness functions to create and project a self-image that we want others to believe in. We consciously believe many things about ourselves that are completely at odds with how we actually behave and the goals we actually seek. So it would be surprising if we couldn’t invoke these mechanisms in at least some circumstances.
generalizing from fictional evidence
When I wrote that I was aware that it was a fictional account deliberately made up to illustrate a point. I didn’t mention that, though, so I created fictional evidence. Thanks for flagging this, and I should be more careful!
Worse: fictional evidence flagged as nonfictional—like Alicorn’s fictional MIT classmates that time.
My what now? I think that was someone else. I don’t think I’ve been associated with MIT till now.
MIT not only didn’t accept me when I applied, they didn’t even reject me. I never heard back from them yea or nay at all.
That was me.
Of course, irony being what it is, people will now flag the Alicorn—MIT reference as nonfictional, and be referring to Alicorn’s MIT example for the rest of LW history :)
Attempting to analyze my own stupidity, I suspect my confusion came from (1) Alicorn and Yvain both being high-karma contributors and (2) Alicorn’s handle coming more readily to mind, both because (a) I interacted more with her and (b) the pronunciation of “Alicorn” being more obvious than that of “Yvain”.
In other words, I have no evidence that this was anything other than an ordinary mistake.
I’ve been imagining “Yvain” to be pronounced “ee-vane”. I’d be interested in hearing a correction straight from the ee-vane’s mouth if this is not right, though ;) I’ve heard people mispronounce “Alicorn” on multiple occasions.
You mean Alicorn is a real name? I had assumed a combination of Alison and Unicorn, with symbolic implications beyond my ken.
“Ye-vane” here, with the caveat that I was quite confident that it was way off.
No, it’s not a real name (as far as I know). It’s a real word. It means a unicorn’s horn, although there are some modern misuses mostly spearheaded by Piers Anthony (gag hack cough).
Ahh. And I’ve been going about calling them well, unicorn horns all these years!
I’ve been saying “al-eh-corn” in my head. Also “ee-vane”, which suggests my problem is less “Yvain is hard to pronounce” than “Yvain doesn’t look like the English I grew up speaking”.
Incidentally, I can’t remember how to pronounce Eliezer. I saw him say it at the beginning of a Bloggingheads video and it was completely different from my naive reading.
“Alicorn” is pronounced just like “unicorn”, except that the “yoon” is replaced with “al” as in “Albert” or “Alabama”. So the I is an “ih”, not an “eh”, but you can get away with an undifferentiated schwa.
Thanks!
(I think that’s how I was saying it, actually—I wasn’t sure how to write the second syllable.)
ell-ee-EZZ-er (is how I hear it).
*checks*
Yvain’s fictional MIT classmates.
I swear that wasn’t on purpose.
What’s fictional about that?
Ready to pony up money for a bet that I can’t produce a warm body meeting that description?
I prefer not to gamble, but just to satisfy my own curiosity: what would the controls be on such a bet? Presumably you would have to prove to Knight’s satisfaction that your unbelieving belief-signaler was legitimately thus.
I think my evidence is strong enough that I can trust Douglas_Knight’s own intellectual integrity.
Huh. My last couple of interactions with you, you called me a liar.
Okay, I found what I think you’re referring to. Probably not my greatest moment here, but is that really something you want sympathy for? Here’s the short version of what happened.
You: If you think your comment was so important, don’t leave it buried deep in the discussion, where nobody can see it.
Me: But I also linked to it from a more visible place. Did you not know about that?
You: [Ignoring previous mischaracterization] Well, that doesn’t solve the problem of context. I clicked on it and couldn’t understand it, and it seemed boring.
Me: Wait, you claim to be interested in a solution, I post a link saying I have one, and it’s too much of a bother to read previous comments for context? That doesn’t make sense. Your previous comment implies you didn’t know about the higher link. Don’t dig yourself deeper by covering it up.
Oh, yeah, I’d forgotten that one. Actually, I was thinking of the following week.
I just want you to go away. I was hoping that reminding you that you don’t believe me would discourage you from talking to me.
That’s not calling you a liar. That’s criticizing the merit of your argument. There’s a difference.
The link provided by Douglas seems to suggest that Douglas’s accusation is false (as well as ineffective).
Edit: s/petty/ineffective/
Would you mind elaborating on your take on that thread? What’s of most interest to me is what you think I meant, but I’m also interested in whether you’d say that Silas called Zack a liar.
Let’s go back a few steps. You said that in your “last few interactions” with me, I called you a liar. You later clarified that you were thinking of this discussion. But I didn’t deny calling Zack a liar in that discussion; I denied calling you a liar. So why are you suddenly acting like your original claim was about whether I called Zack a liar?
(In any case, it wasn’t just “Zack, you liar”. My remark was more like, “this is what you claimed, this is why it’s implausible, this is why your comments are hindering the discussion, please stop making this so difficult by coming up with ever-more-convoluted stories.”)
Are you and Zack the same person?
Considering that the earlier discussion was about whether you can arbitrarily redefine yourself as a different person, maybe Zack/Douglas are just taking the whole idea a little too seriously! :-P
(And in a show of further irony, that would be just the kind of subtle point that Zack and [?] Douglas, severely overestimating its obviousness, were defending in the thread!)
No.
I apologize to third parties for the poor timing of my deletion of the above comment. It was really addressed to wedrifid and broadcasting it was petty, though not as petty as the excerpt looks.
Alright, well, good luck “getting the goods” on ol’ Silas! Just make sure not to get your claims mixed up again...
Well, what possessed you to lie to me? ;-)
j/k, j/k, you’re good, you’re good.
A link would be nice though.
And even taking into account any previous mistrust I might have had of you, I think my evidence is still strong enough that I can trust you to consider it conclusive.