When I say something is good or bad (“yay doggies!”) it’s usually a kind of shorthand:
pizza is good == pizza tastes good and is fun to make and share
seafood is bad == most cheap seafood is reprocessed offcuts and gave me food poisoning once
yay doggies == I find canine companions to be beneficial for my exercise routine, useful for home security and fun to play with.
I suspect that when most people use the words ‘good’ and ‘bad’, they are using just this kind of linguistic compression. Or is your point that once a ‘good’ label is assigned, we just increment its goodness index and forget the detailed reasoning that led us to it? Sorry, the post was an interesting read, but I’m not sure what you want me to conclude.
Exactly that. We may be able to recall our reasoning if we try to, but we’re likely to throw in a few extra false justifications on top, and to forget about the other side.
OK, ‘compression’ is the wrong analogy, as it implies that we don’t lose any information. I’m not sure this is always a bad thing. I might have a use for a particular theorem. Being the careful sort, I work through the proof. Satisfied, I add the theorem to my grab bag of tricks (yay product rule!). In a couple of weeks (hours, even...) I have forgotten the details of the proof, but I have enough confidence in my own upvote of the theorem to keep using it. The details are no longer relevant unless some other evidence comes along that brings the theorem, and thus the ‘proof’, into question.
This drives me crazy when it happens to me.
Someone: “Shall we invite X?”
Me: “No, X is bad news. I can’t remember at all how I came to this conclusion, but I recently observed something and firmly set a bad news flag against X.”
Those kinds of flags are the only way I can remember what I like. My memory is poor enough that I lose most details about books and movies within a few months, but if I really liked something, that 5-Yay rating sticks around for years.
Hmm, I guess that’s why part of my brain still thinks Moulin Rouge, which I saw on a very enjoyable date and have never really had the urge to actually watch again, is one of my favorite movies.
Compression seems a fine analogy to me, as long as we’re talking about mp3’s and flv’s, rather than zip’s and tar’s.
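To make the mp3-vs-zip distinction concrete, here’s a minimal sketch using only the Python standard library (the sample data is invented for illustration): the lossless codec gives back exactly what went in, while the lossy one keeps only the headline verdict.

```python
import zlib

data = b"pizza is good: it tastes good and is fun to make and share"

# Lossless, zip-style: decompressing recovers the original bytes exactly.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Lossy, mp3-style (a toy stand-in): keep only the verdict.
# The detail that justified it is gone for good.
verdict = data.split(b":")[0]  # b'pizza is good'
assert verdict != data
```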
tar’s are archives, not compressed files. tar.gz’s are compressed.
I think of it as memoisation rather than compression.
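In code terms, that’s roughly this (a minimal sketch; the predicate and its contents are invented for illustration): the cache stores only the verdict, and the work that produced it never runs again.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_good(thing: str) -> bool:
    # The careful, detailed evaluation runs at most once per input.
    print(f"carefully weighing the evidence about {thing}...")
    return thing in {"pizza", "doggies", "the product rule"}

is_good("doggies")  # does the full evaluation
is_good("doggies")  # cache hit: only the verdict survives, not the reasoning
```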
It may be useful shorthand to say “X is good”, but when we forget the specific boundaries of that statement and only remember the shorthand, it becomes a liability. When we decide that the statement “Bayes’ Theorem is valid, true, and useful in updating probabilities” collapses into “Bayes’ Theorem is good,” we invite the abuse of Bayes’ Theorem.
So I wouldn’t say it’s always a bad thing, but I’d say it introduces unnecessary ambiguity and contributes to sub-optimal moral reasoning.
Do you have some good examples of abuse of Bayes’ theorem?
That is a good question for a statistician, and I am not a statistician.
One thing that leaps to mind, however, is two-boxing on Newcomb’s Problem using assumptions about the prior probability of box B containing $1,000,000. Some new work, using math that I don’t begin to understand, suggests that either response to Newcomb’s Problem is defensible using Bayesian nets.
There could be more trivial cases, too, where a person inputs unreasonable prior probabilities and uses cargo-cult statistics to support some assertion.
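A toy version of that failure mode, with numbers invented for the example: feed Bayes’ theorem an extreme enough prior and the posterior mostly just echoes it back, whatever the evidence says.

```python
# P(H|E) = P(E|H) * P(H) / P(E), expanding P(E) over H and not-H.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# The evidence is 10x likelier under not-H, yet a 0.999 prior shrugs it off:
print(posterior(0.999, p_e_given_h=0.05, p_e_given_not_h=0.5))  # ~0.990
```

The arithmetic is impeccable; the abuse is in where the 0.999 came from.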
Also, it’s struck me that a frequentist statistician might call most Bayesian uses of the theorem “abuses.”
I’m not sure those are really good examples, but I hope they’re satisfying.
I suspect it’s more likely that we won’t remember it at all; we’d simply increase the association between the thing and goodness and, if looking for a reason, rationalize one on the spot. Our minds are very good at coming up with explanations, but not good at remembering details.
Of course, if your values and knowledge haven’t changed significantly, you’ll likely confabulate something very similar to the original reasoning; but as the distance increases between the points of decision and rationalization, the accuracy is likely to drop.