I think the Allais paradox is fascinating; however, although it is very revealing about our likely motives for playing the lottery, it doesn’t change the potential rationality of actually playing it. That is, money and value don’t necessarily have a linear relationship, so optimising for EV (expected monetary value) is not necessarily rational.
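For a concrete illustration (every number here is made up): suppose a ticket costs £2, the jackpot is £1,000,000, and the chance of winning is one in a million. The expected money from playing is clearly negative, but if a life-changing windfall is worth disproportionately more to the player than its face value, the expected utility of playing can still come out positive. A minimal sketch in Python, where the utility function and its “life-changing” threshold are purely hypothetical:

```python
# A minimal sketch, with entirely hypothetical numbers, of how a nonlinear
# utility function can make a negative-EV lottery ticket rational to buy.

TICKET_PRICE = 2.0           # cost of one ticket
JACKPOT = 1_000_000.0        # prize money
P_WIN = 1e-6                 # hypothetical win probability

def utility(money_change):
    """Toy utility: small sums are valued at roughly face value, but a
    life-changing windfall gets a disproportionate one-off bump."""
    if money_change >= 100_000:          # hypothetical 'life-changing' threshold
        return 3_000_000 + money_change  # the size of the bump is an assumption
    return money_change

# Expected *money* from playing: negative.
ev_money = -TICKET_PRICE + P_WIN * JACKPOT
print(f"expected money : {ev_money:+.2f}")    # -1.00

# Expected *utility* of playing vs. not playing.
eu_play = ((1 - P_WIN) * utility(-TICKET_PRICE)
           + P_WIN * utility(JACKPOT - TICKET_PRICE))
eu_skip = utility(0.0)
print(f"utility (play) : {eu_play:+.2f}")     # +2.00 under these assumptions
print(f"utility (skip) : {eu_skip:+.2f}")     # +0.00
```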
That said, I feel the likely answer is that the brain is optimised for rapid responses to survival problems, and these heuristics may well be an optimal response given constraints on both processing and expected outcome.
Another perspective is that, in general, specifications are not accurate descriptions but rather communications of experience. Suppose the problem specification is viewed instead as a measurement of a system, where the placing of bets is an input and the output is not random but the outcome of an unknown set of interactions. Systems encountered in the past form a probability distribution over their behaviour; the frequencies of observed consequences then act as a measurement of the likelihood that the system in question is equivalent to one of these known types. This would explain the feeling of switching between the two examples: they constitute the likely outcomes of two types of system, and thus represent situations where distinct behaviours were appropriate.
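To make that concrete, here is a minimal sketch of the idea, with the system types and their outcome probabilities entirely invented: each observed outcome of a bet acts as a Bayesian measurement of which familiar system type one is facing.

```python
# A minimal sketch (system types and probabilities are made up) of treating
# observed outcome frequencies as evidence about *which* familiar system
# one is interacting with.

# Two hypothetical system types, each a distribution over bet outcomes.
# Type A: a near-fair game; Type B: a rare-big-payoff game.
P_OUTCOME = {
    "A": {"small_win": 0.5, "small_loss": 0.5, "big_win": 0.0},
    "B": {"small_win": 0.1, "small_loss": 0.88, "big_win": 0.02},
}

def update(prior, observation):
    """One step of Bayes' rule over system types."""
    unnorm = {t: prior[t] * P_OUTCOME[t].get(observation, 0.0) for t in prior}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

belief = {"A": 0.5, "B": 0.5}   # no idea which system this is yet
for obs in ["small_loss", "small_loss", "small_loss", "big_win"]:
    belief = update(belief, obs)
    print(obs, {t: round(p, 3) for t, p in belief.items()})
# The run of losses nudges belief toward B; a single big_win, which is
# impossible under A, settles it: P(B) -> 1.
```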
That is, as one starts to understand an existing system one gets diminishing returns from optimising one’s interaction with it (a good example is AI programming itself); however, some systems may be unknown to the user. These unknown systems may demonstrate rare but highly beneficial or unexpected events, like an anomaly in a physics experiment. In this case it is rational to play/interact, as doing so provides more information, which may be used to identify the system and thus lead to understanding, and thus to an expected benefit in the future.
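As a toy illustration of that information value (all payoffs, horizons, and probabilities here are assumptions, not data): a player who spends a few interactions identifying an unknown system can come out ahead of one who never explores, even when the unknown system is usually worse than the known one.

```python
# A minimal sketch, with made-up numbers, of why interacting with an unknown
# system can be rational purely for the information it yields.
import random

random.seed(0)

KNOWN_PAYOFF = 1.0   # well-understood system: fixed return per play
HORIZON = 1000       # remaining interactions
N_PROBES = 20        # plays spent identifying the unknown system

def play_unknown(true_mean):
    """The unknown system's outcomes; its mean is hidden from the player."""
    return random.gauss(true_mean, 1.0)

def strategy_explore(true_mean):
    """Probe the unknown system, then commit to whichever looks better."""
    probes = [play_unknown(true_mean) for _ in range(N_PROBES)]
    estimate = sum(probes) / N_PROBES
    rest = HORIZON - N_PROBES
    if estimate > KNOWN_PAYOFF:
        return sum(probes) + sum(play_unknown(true_mean) for _ in range(rest))
    return sum(probes) + rest * KNOWN_PAYOFF

def strategy_never_explore():
    return HORIZON * KNOWN_PAYOFF

# Average over many worlds where the unknown system is *usually* worse
# (mean 0.5) but occasionally much better (mean 3.0) -- a rare anomaly.
trials = 2000
total = 0.0
for _ in range(trials):
    true_mean = 3.0 if random.random() < 0.1 else 0.5
    total += strategy_explore(true_mean)
print("explore :", total / trials)          # comes out above 1000 on average
print("exploit :", strategy_never_explore())  # exactly 1000
```

The exploration strategy wins on average precisely because of the rare worlds where the unknown system turns out to be much better, which is the point about anomalies above.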
Of course, that just means you maximise expected utility rather than expected money. (I was almost going to write “expected value” instead of “expected utility” as you used the word “value”, but obviously that would be confusing in this context...)
Yes, absolutely, apologies for my unfamiliarity with the terms.
The point I’m trying to make is that lottery playing optimises utility (assuming utility means what the person in question considers valuable). Saying that lottery playing is irrational is making a statement about what is valuable more than it is about what is reasonable.
Thank you for your comments.