OK, ‘compression’ is the wrong analogy, as it implies that we don’t lose any information. I’m not sure losing information is always a bad thing. I might have a use for a particular theorem. Being the careful sort, I work through the proof. Satisfied, I add the theorem to my grab bag of tricks (yay product rule!). In a couple of weeks (hours, even...) I have forgotten the details of the proof, but I have enough confidence in my own upvote of the theorem to keep using it. The details are no longer relevant unless some other evidence comes along that brings the theorem, and thus the ‘proof’, into question.
This drives me crazy when it happens to me.
Someone: “Shall we invite X?”
Me: “No, X is bad news. I can’t remember at all how I came to this conclusion, but I recently observed something and firmly set a bad news flag against X.”
Those kinds of flags are the only way I can remember what I like. My memory is poor enough that I lose most details about books and movies within a few months, but if I really liked something, that 5-Yay rating sticks around for years.
Hmm, I guess that’s why part of my brain still thinks Moulin Rouge, which I saw on a very enjoyable date and never really had the urge to actually watch again, is one of my favorite movies.
Compression seems a fine analogy to me, as long as we’re talking about mp3’s and flv’s, rather than zip’s and tar’s.
tar’s are archived, not compressed. tar.gz’s are compressed.
I think of it as memoisation rather than compression.
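A minimal sketch of what I mean, in Python, with a hypothetical "proof check" standing in for the hard part: the first call pays for the full verification, and later calls just reuse the cached verdict, the way you keep the "yay product rule!" flag without re-deriving anything.

```python
from functools import lru_cache

def verify_proof(theorem: str) -> bool:
    # Placeholder for the laborious part we later forget: actually working
    # through the proof line by line. (Hypothetical check, for illustration.)
    return theorem == "product rule"

@lru_cache(maxsize=None)
def theorem_holds(theorem: str) -> bool:
    """The first call pays the cost of verify_proof; only the boolean
    verdict is cached, so later calls skip the details entirely."""
    return verify_proof(theorem)

print(theorem_holds("product rule"))  # works through the 'proof'
print(theorem_holds("product rule"))  # reuses the cached verdict
```

Only the conclusion survives in the cache; the derivation is thrown away, which is exactly the lossy part the compression analogy glosses over.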
It may be useful shorthand to say “X is good”, but when we forget the specific boundaries of that statement and only remember the shorthand, it becomes a liability. When we decide that the statement “Bayes’ Theorem is valid, true, and useful in updating probabilities” collapses into “Bayes’ Theorem is good,” we invite the abuse of Bayes’ Theorem.
So I wouldn’t say it’s always a bad thing, but I’d say it introduces unnecessary ambiguity and contributes to sub-optimal moral reasoning.
Do you have some good examples of abuse of Bayes’ theorem?
That is a good question for a statistician, and I am not a statistician.
One thing that leaps to mind, however, is two-boxing on Newcomb’s Problem using assumptions about the prior probability of box B containing $1,000,000. Some new work using math that I don’t begin to understand suggests that either response to Newcomb’s Problem is defensible using Bayesian nets.
There could be more trivial cases, too, where a person inputs unreasonable prior probabilities and uses cargo-cult statistics to support some assertion.
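To make that concrete (with made-up numbers), here is a small sketch of how much a single Bayes update can hinge on the prior you feed it; the function and the figures are hypothetical, not taken from any real case:

```python
def posterior(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """Bayes' theorem for a binary hypothesis:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    evidence = likelihood * prior + likelihood_if_false * (1.0 - prior)
    return likelihood * prior / evidence

# Same evidence each time (a test that's right 90% of the time), but very
# different verdicts depending on the prior -- hypothetical numbers throughout.
print(posterior(prior=0.5,   likelihood=0.9, likelihood_if_false=0.1))  # ~0.90
print(posterior(prior=0.99,  likelihood=0.9, likelihood_if_false=0.1))  # ~0.999
print(posterior(prior=0.001, likelihood=0.9, likelihood_if_false=0.1))  # ~0.009
```

Pick a conveniently extreme prior and the same evidence "supports" nearly any conclusion you wanted in the first place.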
Also, it’s struck me that a frequentist statistician might call most Bayesian uses of the theorem “abuses.”
I’m not sure those are really good examples, but I hope they’re satisfying.