I liked this post a lot. I have the same “bullshit” sense for certain words and thoughts, but my concern is that this is just a bias caused by extrapolating from one example. There are certain political issues, for instance, that I’ve seen so many illogical arguments for that I’m biased against them now.
As far as love being irrational, there actually is some evidence for that.
Hmm… actually, you made me realize there is another part to this reaction. I tend to ignore not-beliefs. I draw beliefs on my map, and there isn’t a place for a not-belief. An active negative belief can be drawn, but I see that as different from refusing to accept a belief due to lack of evidence.
In other words, I see a difference between, “I don’t believe the Earth is flat” and “I believe the Earth is not flat.”
I end up arguing about this distinction pretty frequently, though. I have no idea how LessWrong feels about it. Also, I am making these terms up as I go along; there are probably more accurate ways to say what I am saying.
But the point is that the “bullshit” response drops its victim into the realm of not-belief. As such, I forget about it, and when the question pops up again there isn’t anything in that area of the map to contend with the proposed answer. If the reaction is, again, “bullshit,” nothing will change.
In a more Bayesian framework, you assign each statement a probability of being true, based on all the evidence you’ve collected so far. You then update these probabilities as new evidence comes in. An active negative belief corresponds to a low probability, and refusing to accept a belief for lack of evidence might correspond to a slightly higher probability.
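A minimal sketch of that updating rule, in Python, using Bayes’ rule in odds form (the helper name and the numbers here are made up purely for illustration):

    def update(prior, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio,
        # where likelihood_ratio = P(evidence | claim) / P(evidence | not claim).
        odds = prior / (1 - prior) * likelihood_ratio
        return odds / (1 + odds)

    # Strong evidence against the claim (10x likelier if the claim is false):
    print(update(0.5, 0.1))   # ~0.09 -- an "active negative belief"

    # Weak, nearly uninformative evidence:
    print(update(0.5, 0.9))   # ~0.47 -- "not believing" for lack of evidence

In this picture the difference between the two kinds of disbelief is just how far the probability has been pushed below the prior.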
Okay, sure, that makes sense. I guess I have a weird middle range, say 45-55%, where I just drop the belief from the probability matrix altogether because I am lazy and don’t want to keep track of everything. The impact on my actions is negligible until well outside that range.
An exception would be something I have done a lot of studying or research on. The information, in that case, is extremely valuable. The belief still sits in the “Undecided” category, but I am not throwing out all that hard work.
Is this sort of thing completely sacrilegious toward the Way of Bayes? Note that 45-55% is just a range I made up on the spot. I don’t actually have such a range defined; it just matches my behavior when I translate myself into Bayesian terms.
No, that makes sense to me. You have essentially no information about whether a statement is more likely to be true or false at that percentage range.
Sort-of agree. The Bayesian formulation of a similar strategy is: don’t bother remembering an answer to a question when that answer is the same as what you would derive from the ignorance prior. That is, discard evidence whose likelihood ratio is near 1. However, the prior isn’t always 50%.
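A toy illustration of that point, again with made-up numbers: evidence whose likelihood ratio is close to 1 barely moves the posterior no matter where the prior sits, so there is little cost to not storing it.

    def update(prior, likelihood_ratio):
        # Bayes' rule in odds form
        odds = prior / (1 - prior) * likelihood_ratio
        return odds / (1 + odds)

    # A likelihood ratio near 1 leaves every prior almost where it was.
    for prior in (0.2, 0.5, 0.9):
        print(prior, "->", round(update(prior, 1.05), 3))
    # 0.2 -> 0.208, 0.5 -> 0.512, 0.9 -> 0.904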
Cool. I guess I never thought about what the distinction between active and passive disbelief would be for a Bayesian. It makes perfect sense now that I think about it… and it would have certainly made a whole bunch of discussions in my past a lot easier.
Pssh. Always learning something new, I guess.