The second assumption, however, is harder to justify. There are many ways that a calculation of odds could go wrong (putting a decimal point in the wrong place, making a multiplication error, unknowingly misunderstanding the laws of probability, actually being insane, etc.). If we could really enumerate all of them, understand how they affect our computed payout probability, and estimate the probability of each occurring, then we could compute this missing factor exactly. As things stand, though, that is probably untenable. Nor should we expect errors that make the payout probability artificially larger to balance those that make it artificially smaller. Misplacing a decimal point, for example, will almost certainly be noticed if it leads to a percentage greater than 100%, but not if it leads to one that is less than that, creating an asymmetry.
This is a valid point, and one I missed in my writeup. (Toby_Ord said something similar, but that was in response to a specific question.)
It is probably a useful skill to recognize asymmetries in the possible direction of error, such as that which you pointed out. I can see two ways to handle this:
a. Additional terms in the derivation, such as P(decimal-point error) and P(sign error), with the e term restricted to the unanticipated-error case (see the sketch below).
b. Modification of e.
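
For concreteness, here is a minimal sketch of option (a). The error-type probabilities, the conditional payout estimates, and the function name are all illustrative assumptions on my part, not values from the original derivation; the point is only to show how explicit terms for anticipated errors can sit alongside a residual e for unanticipated ones.

```python
# Sketch of option (a): decompose the error term into anticipated error
# types plus a residual e for unanticipated errors. All names and numbers
# are illustrative assumptions, not part of the original derivation.

def adjusted_payout_probability(p_computed,
                                p_decimal_error=1e-3,
                                p_sign_error=1e-4,
                                e_unanticipated=1e-3):
    """Combine the computed payout probability with explicit error terms.

    For each anticipated error type we supply a rough estimate of the true
    payout probability conditional on that error having occurred; for
    unanticipated errors we fall back on a maximally uncertain 0.5.
    """
    p_no_error = 1.0 - p_decimal_error - p_sign_error - e_unanticipated

    # Conditional payout probabilities given each error type (assumptions):
    # a misplaced decimal point most plausibly inflated p by a factor of 10,
    # a sign error suggests the true value is roughly complementary, and an
    # unanticipated error tells us almost nothing.
    p_given_decimal = p_computed / 10.0
    p_given_sign = 1.0 - p_computed
    p_given_unanticipated = 0.5

    return (p_no_error * p_computed
            + p_decimal_error * p_given_decimal
            + p_sign_error * p_given_sign
            + e_unanticipated * p_given_unanticipated)


# Example: a computed payout probability of 0.999 gets pulled down once the
# explicit error terms are folded in.
print(adjusted_payout_probability(0.999))
```

Because the anticipated errors mostly pull the estimate in one direction here, the sketch also illustrates the asymmetry point: the correction is not symmetric around the computed value.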