So, more data is good because it makes you more confident? I guess that makes sense, but it still seems strange not to care what you’re confident in.
In any real problem there is a context and some prior information. Bayes doesn’t give this to you—you give it to Bayes along with the data and turn the crank on the machinery to get the posterior. The things you’re confident about are in the context.
What about changing your mind?
In theory, if you can change your mind about something, you have uncertainty about it, and your prior distribution should reflect that. In practice, you abstract the uncertainty away by making some simplifying assumptions, do the analysis conditional on your assumptions, and reserve the right to revisit the assumptions if they don’t seem adequate.
I didn’t mean to ask how a Bayesian changes his or her mind. I meant to ask how the thing you believe in can be in the context in situations where you change your mind based on new evidence.
Let’s say I’m weighing some acrylamide powder on an electronic balance. (Gonna make me some polyacrylamide gel!) The balance is so sensitive that small changes in air pressure register in the last two digits. From what I know about air pressure variations from having done this before, I create a model for the data. Also because I’ve done this before, I can eyeball roughly how much powder I’ve got on the balance; this determines my prior distribution before reading the balance. Then I observe some data from the balance readout and update my distribution.
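To make that update concrete, here is a minimal sketch, assuming (purely for illustration) a normal prior from eyeballing the powder and normally distributed readout noise from the air-pressure variation; every number in it is made up.

```python
# Conjugate normal-normal update: a prior belief about the mass of powder
# on the balance, combined with one noisy reading. All numbers hypothetical.

prior_mean = 5.00   # grams; eyeballed from past experience
prior_sd   = 0.50   # grams; how rough the eyeball estimate is
noise_sd   = 0.02   # grams; readout jitter from air-pressure changes

reading = 4.87      # grams; what the balance displays

# Work in precisions (1/variance); the conjugate posterior is then a
# precision-weighted average of the prior mean and the observation.
prior_prec = 1 / prior_sd**2
noise_prec = 1 / noise_sd**2

post_prec = prior_prec + noise_prec
post_mean = (prior_prec * prior_mean + noise_prec * reading) / post_prec
post_sd   = post_prec ** -0.5

print(f"posterior: mean = {post_mean:.4f} g, sd = {post_sd:.4f} g")
```

Because the balance is far more precise than the eyeball estimate, the posterior sits essentially on the reading; the prior mostly serves to rule out gross mistakes.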
I can’t tell without more information whether that’s an example of what I mean by “changing your mind.” Here’s one that I think definitely qualifies:
Let’s say you’re going to bet on a coin toss. You only have a small amount of information on the coin, and you decide for whatever reason that there’s a 51% chance of getting heads. So you’re going to bet on heads. But then you realize that there’s a way to get more data.
At this point, I’m thinking, “Gee, I hardly know anything about this coin. Maybe I’m better off betting on tails and I just don’t know it. I should get that data.”
What I think you’re saying about Bayesians is that a Bayesian would say, “Gee, 51% isn’t very high. I’d like to be at least 80% sure. Since I don’t know very much yet, it wouldn’t take much more to get to 80%. I should get that data so I can bet on heads with confidence.”
Which sort of makes sense but is also a little strange.
Technical stuff: under the standard assumption of infinite exchangeability of coin tosses, there exists some limiting relative frequency for coin toss results. (This is de Finetti’s theorem.)
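For the record, the representation the theorem gives can be written out: for an infinitely exchangeable binary sequence there is some mixing distribution π on [0, 1] such that

```latex
% de Finetti's representation for an infinitely exchangeable
% binary sequence X_1, X_2, ..., with s = x_1 + ... + x_n:
P(X_1 = x_1, \dots, X_n = x_n) = \int_0^1 f^{\,s} (1 - f)^{\,n - s} \, d\pi(f)
```

The integrating variable f is the limiting relative frequency, and π is the distribution for it that the next paragraph talks about.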
Key point: I have a probability distribution for this relative frequency (call it f), not a probability of a probability.
“You only have a small amount of information on the coin, and you decide for whatever reason that there’s a 51% chance of getting heads. So you’re going to bet on heads. But then you realize that there’s a way to get more data.”

Here you’ve said that my probability density for f is dispersed, but slightly asymmetric. I too can say, “Well, I have an awful lot of probability mass on values of f less than 0.5. I should collect more information to tighten this up.”
“Gee, 51% isn’t very high. I’d like to be at least 80% sure. Since I don’t know very much yet, it wouldn’t take much more to get to 80%. I should get that data so I can bet on heads with confidence.”

This mixes up f on the one hand with my distribution for f on the other. I can certainly collect data until I’m 80% sure that f is bigger than 0.5 (provided that f really is bigger than 0.5). This is distinct from being 80% sure of getting heads on the next toss.
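A sketch of that distinction, assuming (just for illustration) a Beta prior for f whose mean matches the 51% above, plus some made-up data from the extra tosses:

```python
from scipy.stats import beta

# Hypothetical prior for f: very dispersed, tilted slightly toward heads,
# so the prior probability of heads on the next toss is 0.51.
a, b = 1.02, 0.98        # Beta(a, b); prior mean a / (a + b) = 0.51

heads, tails = 14, 6     # made-up results from the extra tosses

# By conjugacy the posterior for f is Beta(a + heads, b + tails).
post = beta(a + heads, b + tails)

# A statement about my distribution for f: how sure am I that f > 0.5?
p_f_above_half = post.sf(0.5)

# A statement about the next toss: the posterior predictive probability
# of heads, which is just the posterior mean of f.
p_next_heads = (a + heads) / (a + b + heads + tails)

print(f"P(f > 0.5 | data)            = {p_f_above_half:.3f}")
print(f"P(next toss is heads | data) = {p_next_heads:.3f}")
```

The two numbers come apart quickly: confidence that f exceeds 0.5 can climb well past 80% while the predictive probability of heads stays much closer to the observed frequency.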
I guess I just don’t understand the difference between Bayesianism and frequentism. If I had seen your discussion of limiting relative frequency somewhere else, I would have called it frequentist.
I think I’ll go back to borrowing bits and pieces. (Thank you for some nice ones.)
The key difference is that a frequentist would not admit the legitimacy of a distribution for f—the data are random, so they get a distribution, but f is fixed, although unknown. Bayesians say that quantities that are fixed but unknown get probability distributions that encode the information we have about them.
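The contrast is easy to see in code. Under the same made-up toss data as above, the frequentist reports a point estimate of the fixed-but-unknown f (with a confidence interval describing the randomness of the data), while the Bayesian reports a distribution for f itself. A sketch, with a uniform prior chosen purely for illustration:

```python
import math
from scipy.stats import beta

heads, tails = 14, 6
n = heads + tails

# Frequentist view: f is fixed; only the data are random.
f_hat = heads / n                              # point estimate of f
se = math.sqrt(f_hat * (1 - f_hat) / n)        # standard error of f_hat
ci = (f_hat - 1.96 * se, f_hat + 1.96 * se)    # 95% Wald confidence interval

# Bayesian view: f is fixed but unknown, so it gets a distribution
# encoding what we know about it.
post = beta(1 + heads, 1 + tails)              # posterior under a uniform prior
cred = post.interval(0.95)                     # 95% equal-tailed credible interval

print(f"frequentist: f_hat = {f_hat:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"bayesian:    95% credible interval    ({cred[0]:.3f}, {cred[1]:.3f})")
```

The two intervals can look numerically similar, but they mean different things: the confidence interval is a statement about the procedure over repeated data sets; the credible interval is a statement about f given this one.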