Let’s say Alice and Bob are in two different rooms and can’t see each other. Alice rolls a 6-sided die and looks at the outcome. Bob doesn’t know the outcome, but knows that the die has been rolled. In your interpretation of the word “probability”, can Bob talk about the probabilities of the different roll outcomes after Alice rolled?
I’m having a hard time answering this question with “yes” or “no”:
The event in question is “Alice rolling a particular number on a 6-sided die.” Bob, not knowing what Alice rolled, can talk about the probabilities associated with rolling a fair die many times, and base whatever decision he has to make on this probability (assuming that she is, in fact, using a fair die). Depending on the assumed complexity of the system (does he know that this is a loaded die?), he could combine a bunch of other probabilities to increase the chances that his decision is accurate.
Yes… I guess?
(Or, are you referring to something like: If Alice rolled a 5, then there is a 100% chance she rolled a 5?)
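(As a rough sketch of what I mean by combining other probabilities, here is a toy calculation in Python. The prior on the die being fair and the loaded-die distribution are made-up numbers, purely for illustration:)

```python
# Toy calculation: Bob's outcome probabilities when he isn't sure the die is fair.
# The prior (p_fair) and the loaded-die distribution are made-up numbers.

p_fair = 0.9                            # Bob's prior belief that the die is fair
fair = {k: 1 / 6 for k in range(1, 7)}  # fair die: uniform over 1..6
loaded = {1: 0.05, 2: 0.05, 3: 0.05, 4: 0.05, 5: 0.05, 6: 0.75}  # hypothetical loaded die

# Mixture: P(outcome) = P(fair) * P(outcome | fair) + P(loaded) * P(outcome | loaded)
bob = {k: p_fair * fair[k] + (1 - p_fair) * loaded[k] for k in range(1, 7)}
print(bob)  # e.g. P(6) = 0.225, P(1) = 0.155
```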
Well, the key point here is whether the word “probability” can be applied to things which already happened but you don’t know what exactly happened. You said
A quantitative thing that indicates how likely it is for an event to happen.
which implies that probabilities apply only to the future. The question is whether you can speak of probabilities as lack of knowledge about something which is already “fixed”.
Another issue is that in your definition you just shifted the burden of work to the word “likely”. What does it mean that an event is “likely” or “not likely” to happen?
EDIT: The neighboring comment here raises the same point (using the same type of example!). I wouldn’t have posted this duplicate comment if I had caught this in time.
I’m also confused about the debate.
Isn’t the “thing that hasn’t happened yet” always an anticipated experience? (Even if we use a linguistic shorthand like “the die roll is 6 with probability .5”.)
Suppose Alice tells Bob she has rolled the die, but in reality she waits until after Bob has already done his calculations and secretly rolls the die right before Bob walks into the room. Could Bob have any valid complaint about this?
Once you translate into anticipated experiences of some observer in some situation, it seems like the difference between the two camps is about the general leniency with which we grant that the observer can make additional assumptions about their situation. But I don’t see how you can opt out of assuming something: Any framing of the P(“sun will rise tomorrow”) problem has to implicitly specify a model, even if it’s the infinite-coin-flip model.
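(For instance, one standard way to cash out the infinite-coin-flip framing is Laplace’s rule of succession: model each sunrise as an i.i.d. flip with an unknown bias and put a uniform prior on that bias. After n sunrises and no failures, that model gives P(sun rises tomorrow) = (n+1)/(n+2). The answer only exists once some such model has been assumed.)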
Sorry, I didn’t mean to imply that probabilities only apply to the future. Probabilities apply only to uncertainty.
That is, given the same set of data, there should be no difference between event A having already happened, with you guessing whether or not it happened, and event A not having happened yet, with you guessing whether or not it will happen.
When you say “apply a probability to something,” I think:
“If one were to have to make a decision based on whether or not event A will happen, how would one consider the available data in making this decision?”
The only time event A’s happening matters is if its happening generated new data. In the Bob-Alice situation, Alice rolling a die in a separate room gives Bob zero information, so whether or not she has already rolled it doesn’t matter. Here are a few different situations to illustrate:
A) Bob and Alice are in different rooms. Alice rolls the die and Bob has to guess the number she rolled.
B) Bob has to guess the number that Alice’s die will roll. Alice then rolls the die.
C) Bob watches Alice roll the die, but does not see the outcome. Bob must guess the number rolled.
D) Bob is a supercomputer which, upon seeing the roll, can factor in every infinitesimal fact about how Alice rolls the die and about the die itself. Bob-the-supercomputer watches Alice roll the die, but does not see the outcome.
In situations A, B, and C, whether Alice rolls the die before or after Bob’s guess is irrelevant. It doesn’t change anything about Bob’s decision. For all intents and purposes, the questions “What did Alice roll?” and “What will Alice roll?” are exactly the same question. That is: we assume the system is simple enough that rolling a fair die is always the same. In situation D, the questions are different because there’s different information available depending on whether or not Alice has rolled already. That is, the simple-system assumption no longer holds, because Bob is able to see the complexity of the situation and use it in making the same kind of decision. Alice having actually rolled the die does matter.
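(Here is a minimal simulation of the A-C point, assuming a fair die and a Bob who just guesses uniformly: his hit rate comes out around 1/6 whether Alice rolls before or after his guess, because the data available to him is identical either way.)

```python
import random

# Toy simulation: with no information flowing to Bob, it makes no difference
# whether Alice rolls before or after his guess.
random.seed(0)
N = 100_000

def trial(alice_rolls_first):
    if alice_rolls_first:
        outcome = random.randint(1, 6)  # Alice rolls; Bob never sees the result
        guess = random.randint(1, 6)    # Bob guesses with the same (lack of) data
    else:
        guess = random.randint(1, 6)    # Bob guesses first
        outcome = random.randint(1, 6)  # Alice rolls afterwards
    return guess == outcome

hit_rate_rolled_first = sum(trial(True) for _ in range(N)) / N
hit_rate_guessed_first = sum(trial(False) for _ in range(N)) / N
print(hit_rate_rolled_first, hit_rate_guessed_first)  # both come out near 1/6
```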
I don’t quite understand your “likely or not likely” question. To try to answer: If an event is likely to happen, then your uncertainty that it will happen is low. If it is not likely, then your uncertainty that it will happen is high.
(Sorry, I totally did not expect this reply to be so long.)
So, you are interpreting probabilities as subjective beliefs, then? That is a Bayesian, but not a frequentist, approach.
Having said that, it’s useful to realize that the concept of probability has many different… aspects and in some situations it’s better to concentrate on some particular aspects. For example if you’re dealing with quality control and acceptable tolerances in an industrial mass production environment, I would guess that the frequentist aspect would be much more convenient to you than a Bayesian one :-)
If an event is likely to happen, then your uncertainty that it will happen is low.
You may want to reformulate this, as otherwise there’s a lack of clarity between the uncertainty about the event and the uncertainty about your probability for the event. But either way, you’re still saying that probabilities are subjective beliefs, right?