Perhaps—obviously each coin is flipped just once, i.e. Binomial(n=1,p), which is the same thing as Bernoulli(p). I was trying to point out that for any other n it would work the same as a normal coin, if someone were to keep flipping it.
And just as it gets really interesting, that chapter ends. There is no solution provided for stage 4 :/
Bayesianism tells us that there is a unique answer in the form of a probability for the next coin to be heads
I’m obviously new to this whole thing, but is this a largely undebated, widely accepted view on probabilities? That there are NO situations in which you can’t meaningfully state a probability?
For example, let’s say we have observed 100 samples of a real-valued random variable. We can use the maximum entropy principle, and thus use the normal distribution (which has maximal entropy among real-valued distributions with a given mean and variance). We then use standard methods to estimate the population mean, and can even provide a probability that it’s in a certain interval.
But how valid is this result when we knew nothing of the original distribution? What if it was something awkward like the Cauchy distribution? It has no mean, so our interval is meaningless. You can’t just say “well, we’re 60% certain it’s in this interval, which leaves a 40% chance of us being wrong”, because it doesn’t: the mean isn’t outside the interval either! A complete answer would allow for a third outcome, that the mean isn’t defined, but how exactly do you assign a number to this probability?
With this in mind, do we still believe that it’s not wrong (or less wrong? :D) to assume a normal distribution, make our calculations, and decide how much we’d bet that the mean of the next 100,000 samples is in the range −100..100? (The sample means of a Cauchy distribution don’t converge no matter how many samples you add.)
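To make the Cauchy point concrete, here’s a small numerical sketch (entirely my own illustration, not part of the thought experiment; the standard Cauchy just stands in for an “awkward” distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Naive normal-theory interval computed from 100 Cauchy samples.
samples = rng.standard_cauchy(100)
m = samples.mean()
sem = samples.std(ddof=1) / np.sqrt(len(samples))
print(f"naive 95% interval: {m - 1.96 * sem:.2f} .. {m + 1.96 * sem:.2f}")

# Running means of a Cauchy sample never settle down.
for n in (10**2, 10**4, 10**6):
    print(n, rng.standard_cauchy(n).mean())
```

The interval comes out looking perfectly respectable, yet the running means in the second loop keep wandering no matter how many samples you add.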
I read this to say that you can’t calculate a value that is guaranteed to break even in the long term, because there isn’t enough information to do this. (which I tend to agree with)
If I were trying to make a profit then I’d need to know how much to charge for entry. If I could calculate that then yes, I’d offer the bet regardless of how many heads came out of 100 trials.
But this is entirely beside the point; the purpose of this thought experiment is for me to show which parts of Bayesianism I don’t understand and solicit some feedback on those parts.
In particular, a procedure that I could use to actually pick a break-even price of entry would be very helpful.
You take the evidence, and you decide that you pay X. Then we run it lots of times. You pay X, I pick a random coin and flip it. I pay your winnings. You pay X again, I pick again, etc. X is fixed.
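One candidate procedure, purely as a sketch rather than a definitive answer: treat the flips as exchangeable, put a uniform prior on the pool’s overall heads-frequency, and charge the posterior-predictive probability of heads times the payout. The uniform prior and the $1-per-heads payout are my own assumptions, not part of the game as stated.

```python
from fractions import Fraction

def break_even_price(heads, flips, payout=Fraction(1)):
    # Laplace's rule of succession: posterior-predictive P(next flip is heads)
    # under a uniform prior on the pool's heads-frequency.
    p_heads = Fraction(heads + 1, flips + 2)
    return p_heads * payout

# e.g. 60 heads observed in 100 flips, $1 paid out per heads:
print(float(break_even_price(60, 100)))  # 61/102, roughly 0.598
```

Whether that prior is actually justified when the pool may have been tuned adversarially is exactly the part I’m unsure about.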
Preferably, let other people play the game first to gather the evidence at no cost to myself.
For the record, this is not permitted.
My take at it is basically this: average over all possible distributions
It’s easy to say this but I don’t think this works when you start doing the maths to get actual numbers out. Additionally, if you really take ALL possible distributions then you’re already in trouble, because some of them are pretty weird—e.g. the Cauchy distribution doesn’t have a mean or a variance.
distribution about which we initially don’t know anything and gradually build up evidence
I’d love to know if there are established formal approaches to this. The only parts of statistics that I’m familiar with assume known distributions and work from there. Anyone?
The properties of the pool are unknown to you, so you have to take into account the possibility that I’ve tuned them somehow. But you do know that the 100 coins I drew from that pool were drawn fairly and randomly.
I have clarified my post to specify that for each flip, I pick a coin from this infinite pool at random. Suppose you also magically know with absolute certainty that these givens are true. Still $10?
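If it helps, this is the kind of repeated-play check I have in mind (entirely my own sketch; the pool’s true heads-frequency, the $20 payout on heads, and the $10 entry price are invented numbers for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def profit_per_play(entry_price, payout, pool_heads_freq, plays=1_000_000):
    # Each play: pay the entry price, draw a coin from the pool at random,
    # flip it once, and collect the payout if it lands heads.
    heads = rng.random(plays) < pool_heads_freq
    return payout * heads.mean() - entry_price

print(profit_per_play(entry_price=10.0, payout=20.0, pool_heads_freq=0.55))
```

The catch, of course, is that pool_heads_freq is exactly the thing neither of us gets to see.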
Bayesianism in the face of unknowns
This is a good point, and I’ve pondered on this for a while.
Following your logic: we can observe that I’m not spending all my waking time caring about A (people dying somewhere for some reason). Therefore we can conclude that the death of those people is comparable to mundane things I choose to do instead—i.e. the mundane things are not infinitely less important than someone’s death.
But this only holds if my decision to do the mundane things in preference to saving someone’s life is rational.
I’m still wondering whether I do the mundane things by rationally deciding that they are more important than my contribution to saving someone’s life could be, or by simply being irrational.
I am leaning towards the latter—which means that someone’s death could still be infinitely worse to me than something mundane, except that this fact is not accounted for in my decision making because I am not fully rational no matter how hard I try.
The original description of the problem doesn’t mention whether you know Omega’s strategy for deciding what to place in box B, or their track record in predicting this outcome—which is obviously a very important factor.
If you know these things, then the only rational choice, obviously and by a huge margin, is to pick only box B.
If you don’t know anything other than that box B may or may not contain a million dollars, and you have no reason to believe that it’s unlikely (the way a lottery win is unlikely), then the only rational decision is to take both. This also seems to be completely obvious and unambiguous.
But since this community has spent a while debating this, I conclude that there’s a good chance I have missed something important. What is it?
I don’t know. I don’t suppose you claim to know at which point the number of dust specks is small enough that they are preferable to 50 years of torture?
(which is why I think that Idea 2 is a better way to reason about this)
Argh, I have accidentally reported your comment instead of replying. I did wonder why it asks me if I’m sure… Sorry.
It does indeed appear that the only rational approach is for them to be treated as comparable. I was merely trying to suggest a possible underlying basis for people consistently picking dust specks, regardless of the hugeness of the numbers involved.
I think Torture vs Dust Specks makes a hidden assumption that the two things are comparable. It appears that people don’t actually think like that: to them, not even an infinite number of dust specks is worse than a single person being tortured or dying. People arbitrarily place some bad things into a category that’s infinitely worse than another category.
So, I’d say that you aren’t preferring morality; you are simply placing 50 years of torture as infinitely worse than a dust speck; no number of people getting dust specks can possibly be worse than 50 years of torture.
Idea 1: dust specks, because on a linear scale (which seems to be always assumed in discussions of utility here) I think 50 years of torture is more than 3^^^3 times worse than a dust speck in one’s eye.
Idea 2: dust specks, because most people arbitrarily place bad things into incomparable categories. The death of your loved one is deemed to be infinitely worse than being stuck in an airport for an hour. It is incomparable; any number of one-hour waits is less bad than a single loved one dying.
Thanks for this, it really helped.
Here’s how I understand this point, that finally made things clearer:
Yes, there exists a more accurate answer, and we might even be able to discover it by investing some time. But until we do, the fact that such an answer exists is completely irrelevant. It is orthogonal to the problem.
In other words, doing the calculations would give us more information to base our prediction on, but knowing that we can do the calculation doesn’t change it in the slightest.
Thus, we are justified in treating this as “don’t know at all”, even though it seems that we do know something.
Great read, and I think things have finally fit into the right places in my head. Now I just need to learn to guesstimate what the maximum entropy distribution might look like for a given set of facts :)
Well, that and how to actually churn out confidence intervals and expected values for experiments like this one, so that I know how much to bet given a particular set of knowledge.
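For what it’s worth, here’s the sort of calculation I mean, as a sketch under assumptions I’m picking myself (a uniform Beta(1, 1) prior over the pool’s heads-frequency and a made-up observation of 60 heads in 100 flips):

```python
from scipy.stats import beta

# Posterior over the heads-frequency after 60 heads in 100 flips,
# starting from a uniform Beta(1, 1) prior.
posterior = beta(1 + 60, 1 + 40)

lo, hi = posterior.interval(0.95)   # central 95% credible interval
print(f"95% credible interval: {lo:.3f} .. {hi:.3f}")
print(f"posterior mean heads-frequency: {posterior.mean():.3f}")
```

The open question for me is still which prior deserves to be called “maximum entropy” once the facts get messier than a single coin.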