I don’t know about the role of this assumption in AI, which is what you seem to care most about. But I think I can answer about its role in philosophy.
One thing I want from epistemology is a model of ideally rational reasoning under uncertainty. One way to eliminate a lot of candidates for such a model is to show that they make some kind of obvious mistake. In this case, the mistake is judging something as a good bet when really it is guaranteed to lose money.
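A minimal sketch of the "guaranteed to lose money" point, using the standard Dutch book setup (the numbers here are illustrative, not from the original): an agent whose degrees of belief in an event and its negation sum to more than 1 will regard each of two bets as fair, yet buying both loses money in every possible outcome.

```python
# Hypothetical incoherent credences: belief in "rain" and "no rain"
# sum to 1.2, violating the probability axioms.
credence_rain = 0.6
credence_no_rain = 0.6

stake = 1.0  # each bet pays out `stake` if its event occurs

# The agent treats a price of (credence * stake) as fair for each bet,
# so it willingly buys both.
cost = (credence_rain + credence_no_rain) * stake  # 1.2

# Exactly one of the two events happens, so the total payout is
# always exactly `stake`, regardless of the weather.
payout = stake  # 1.0

net = payout - cost
print(net)  # negative in every outcome: a guaranteed loss
```

Because the payout is the same whether or not it rains, no luck can rescue the agent; the loss follows from the incoherence of the credences alone, which is why this counts as an obvious mistake rather than a mere bad break.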