I’m glad to see that my idea was understood. I have read all the replies, but unfortunately I had already come up with all of those ideas while trying to prove the idea to myself.
You meet a bored billionaire who offers you the chance to play a game. The outcome of the game is decided by a single coin flip. If the coin comes up heads, you win a million dollars. If it comes up tails, you win nothing.
The bored billionaire enjoys watching people squirm, so he demands that you pay $10,000 for a single chance to play this game.
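As a sanity check, the expected value of the quoted game is simple arithmetic; here is a quick sketch in Python, using the $10,000 entry fee and $1,000,000 prize from the example above:

```python
# Expected value of the billionaire's game, with the numbers from the example:
# pay $10,000 for one play; 50% chance of $1,000,000, 50% chance of nothing.
entry_fee = 10_000
prize = 1_000_000
p_heads = 0.5

expected_value = p_heads * prize + (1 - p_heads) * 0 - entry_fee
print(expected_value)  # 490000.0
```

So in expectation the game is hugely favorable, even though half the time you simply lose your $10,000.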
I’ve thought of this, but it’s only an intuitive thing and doesn’t directly prove my approach. Or if it does, I’m missing something.
One common way of thinking of expected values is as a long-run average. So if I keep playing a game with an expected loss of $10, then in the long run it becomes more and more probable that I’ll lose an average of about $10 per game.
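That long-run-average interpretation can be seen in a quick simulation. The game below is a hypothetical one I made up for illustration (50% chance to win $20, 50% chance to lose $40, so the expected value is −$10 per play); the point is only that the per-game average drifts toward the expected value as the number of plays grows:

```python
import random

random.seed(0)

def play():
    # Hypothetical game with an expected loss of $10 per play:
    # 50% chance to win $20, 50% chance to lose $40.
    # EV = 0.5 * 20 + 0.5 * (-40) = -10
    return 20 if random.random() < 0.5 else -40

n = 100_000
average = sum(play() for _ in range(n)) / n
print(average)  # close to -10 for large n
```

This is the law of large numbers at work: the probability that the average stays far from −$10 shrinks as n grows, but for any finite n it never reaches certainty, which is exactly the objection raised below.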
I’ve thought of this too, but all it does is result in a different percentage, one closer to the expected outcome. It’s still a percentage, though, and it remains one unless I reach an infinite number of trials.
Forget about the intuitive explanation: is there any evidence at all that a 50% chance of winning $10 is the same as a 100% chance of winning $5, in terms of efficiency? I can hardly imagine the expected-value approach being invalid, but I can’t find evidence either. Most of the people I want to explain it to would understand it.
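The two options in that question can at least be compared empirically. The sketch below simulates many plays of each (the risky option pays $10 on a fair coin flip, the safe one pays $5 every time); both have an expected value of $5 per play, and the risky option's per-play average converges toward that of the safe one. This shows they match in the long-run-average sense, which is of course the very sense being questioned for a finite number of trials:

```python
import random

random.seed(1)

n = 100_000
# Risky option: 50% chance of $10, 50% chance of $0.
risky = sum(10 if random.random() < 0.5 else 0 for _ in range(n)) / n
# Safe option: guaranteed $5 every play.
safe = 5.0

print(risky, safe)  # risky average approaches 5.0
```

What the simulation cannot show is that the two are equivalent for a single play; that equivalence is a modeling assumption (risk neutrality), not an empirical fact.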
I have trouble understanding your setup in the second example. How is your lacking $10 to survive related to the 30% chance of saving 10 people?
It’s not related. It was a separate example.