Am I the only one uncomfortable with this example?
In all games of chance, the issue is not about winning or losing once. It's about the probability of winning and the expected value of betting over the long term.
So if you have a 1⁄132 chance of winning but the payout is 10 million times your bet, you should be willing to bet as much as possible: the expected value is hugely positive, and over enough repeated bets the probability that you end up a winner (in money won, not in rounds played and won) is better than 50%.
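To make that arithmetic concrete, here is a quick simulation sketch. The 1-unit stake, the 100 repetitions, and the simulation itself are my own illustrative assumptions; only the 1⁄132 probability and the 10,000,000x payout come from the example above.

```python
import random

# Sketch: a bet that wins with probability 1/132 and pays 10,000,000x the stake.
# Bet size, number of repetitions, and simulated bettors are illustrative choices.

P_WIN = 1 / 132
PAYOUT = 10_000_000   # winnings as a multiple of a 1-unit stake
N_BETS = 100          # around 100 bets make at least one win more likely than not
N_PLAYERS = 10_000    # simulated bettors

ahead = 0
for _ in range(N_PLAYERS):
    profit = 0.0
    for _ in range(N_BETS):
        if random.random() < P_WIN:
            profit += PAYOUT   # win: collect the huge payout
        else:
            profit -= 1        # lose: forfeit the 1-unit stake
    if profit > 0:
        ahead += 1

print(f"Fraction of bettors ahead after {N_BETS} bets: {ahead / N_PLAYERS:.3f}")
# Roughly 0.53, since 1 - (131/132)**100 ≈ 0.53 and a single win
# dwarfs all the losing stakes combined.
```

The point of the sketch is just that losing most individual rounds is compatible with being a near-certain long-run winner when the expected value per bet is strongly positive.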
The same goes for poker: an expert player is never guaranteed to win a given hand. Expert players are maybe 60⁄40 favorites over bad players, yet after 100 hands they are huge favorites to end up with more money.
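A rough sketch of that claim, under simplifying assumptions of my own: the expert wins each hand independently with probability 0.6, and all hands are played for equal stakes, so "ends up with more money" means winning a majority of hands.

```python
from math import comb

# Probability that a 60/40 per-hand favorite wins a majority of 100 hands,
# assuming independent hands and equal stakes (my simplification, not poker reality).
p, n = 0.6, 100
p_majority = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(n // 2 + 1, n + 1))
print(f"P(expert wins more than half of {n} hands) ≈ {p_majority:.3f}")  # ≈ 0.97
```

Real poker money swings depend on pot sizes rather than hand counts, but the binomial tail shows how quickly a modest per-hand edge becomes a near-certain overall advantage.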
Now, about quantifying the number of bits of information needed to prove a scientific theory: you would need to know the number of possible theories (one correct theory and all the others wrong). Since the number of incorrect theories can be made infinite, counting bits seems to me an unsatisfying way to quantify how likely a theory is to be true.
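For concreteness, this is my reading of the bit-counting idea (the symbol N below is my own notation, not from the original): if the correct theory were one of N equally probable candidates, singling it out would take about

$$\text{bits needed} \approx \log_2 N,$$

and that quantity has no finite value if N, the number of candidate theories, is unbounded.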