Strangely, I have a cached thought of "That's bullshit." It pings on almost anything said in a particular verbal/non-verbal pattern: when someone speaks in a manner that matches the pattern, I think, "That's bullshit," regardless of what they are actually saying. The thought fires, and only afterwards do I stop to wonder whether the claim really is bogus.
If someone tells me that love isn't rational, it is very likely that their communication style is going to ping "That's bullshit." Adding a contrarian viewpoint to everything seems to help prevent me from inserting new cached thoughts. It doesn't, however, help me find existing cached thoughts. Also, I have learned to keep the response internal, because telling everyone they were wrong wasn't helping my social life.
This seems to be a cached thought triggered by a partly physical cue. Does that still fall under the label "cached thought"?
thomblake recently posted a comment with a great antidote to cached thoughts: reverse the claim and see if it (a) triggers another cached thought or (b) seems just as likely on cursory examination.
Well, I won’t. I will be thinking, “Bullshit!”
PS) What are the naughty language expectations here?
Random discussion points related to this behavior:
How does something like Wikipedia relate to cached thoughts?
How do you find cached thoughts in yourself?
How many cached thoughts are hanging around simply to provide excuses for stupid or selfish behavior? Does anyone actually believe love is irrational, or do they merely believe in their belief that love is irrational?
I liked this post a lot. I have the same "bullshit" sense for certain words and thoughts, but my concern is that this is just a bias caused by extrapolating from one example. There are certain political issues, for instance, for which I've seen so many illogical arguments that I'm now biased against them.
As for love being irrational, there is actually some evidence for that.
Hmm… actually, you made me realize there is another part to this reaction: I tend to ignore not-beliefs. I draw beliefs on my map, and there isn't a place for a not-belief. An active negative belief can be drawn, but I treat that differently from refusing to accept a belief for lack of evidence.
In other words, I see a difference between "I don't believe the Earth is flat" and "I believe the Earth is not flat."
I get into arguments about this distinction pretty frequently, though. I have no idea how LessWrong feels about it. Also, I am making these terms up as I go along; there are probably more accurate ways to say what I am saying.
But the point is that the "bullshit" response drops its victim into the realm of not-belief. As such, I forget about it, and when the question pops up again there is nothing in that area of the map to contend with the proposed answer. If the reaction is, again, "bullshit," nothing will change.
In a more Bayesian framework, you assign each statement a probability of being true, based on all the evidence you’ve collected so far. You then change these probabilities based on new evidence. An active negative belief corresponds to a low probability, and refusing to accept a belief based on lack of evidence might correspond to a slightly higher probability.
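To make that concrete, here is a minimal sketch in Python (the function and the likelihood ratios are my own illustrative choices, assuming odds-form updates from a 50% ignorance prior, not anything from this thread):

```python
# Toy sketch of belief-as-probability with odds-form Bayesian updates.
# The likelihood ratios below are made-up illustrative values.

def update(probability, likelihood_ratio):
    """Return the posterior after evidence, where likelihood_ratio is
    P(evidence | claim true) / P(evidence | claim false)."""
    prior_odds = probability / (1.0 - probability)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

belief = 0.5  # ignorance prior

# Active negative belief: strong evidence against drives the probability low.
print(update(belief, 0.1))  # ~0.09

# Refusing to accept for lack of evidence: weak evidence barely moves it.
print(update(belief, 0.9))  # ~0.47
```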
Okay, sure, that makes sense. I guess I have a weird middle range, say 45-55%, where I just drop the belief from the probability matrix altogether because I am lazy and don't want to keep track of everything. The impact on my actions is negligible until well beyond that threshold.
An exception would be something I have studied or researched a lot. In that case the information is extremely valuable; the belief still sits in the "Undecided" category, but I am not throwing out all that hard work.
Is this sort of thing completely sacrilegious toward the Way of Bayes? Note that 45-55% is just a range I made up on the spot; I don't actually have such a range defined. It just matches my behavior when I translate myself into Bayesian terms.
No, that makes sense to me. In that percentage range you have essentially no information about whether a statement is more likely to be true or false.
Sort-of agree. The Bayesian formulation of a similar strategy is: don't bother remembering an answer to a question when that answer is the same as what you would derive from the ignorance prior; i.e., discard evidence whose likelihood ratio is near 1. However, the prior isn't always 50%.
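As a rough sketch of that rule (the threshold and the priors here are arbitrary choices of mine, not anything canonical):

```python
import math

def worth_remembering(likelihood_ratio, threshold=0.2):
    """Cache evidence only if its likelihood ratio is meaningfully far
    from 1, i.e. |log LR| exceeds some threshold."""
    return abs(math.log(likelihood_ratio)) > threshold

def posterior(prior, likelihood_ratio):
    """Odds-form Bayesian update."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

print(worth_remembering(1.05))  # False: discard, it barely moves any belief
print(worth_remembering(5.0))   # True: worth keeping track of

# And the prior isn't always 50%: weak evidence leaves a belief near
# whatever its prior was, which may be nowhere near even odds.
print(posterior(0.01, 1.05))  # ~0.0105, still roughly the 1% prior
```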
Cool. I guess I never thought about what the distinction between active and passive disbelief would be for a Bayesian. It makes perfect sense now that I think about it… and it would have certainly made a whole bunch of discussions in my past a lot easier.
Pssh. Always learning something new, I guess.