Those intuitions point to belief in the robustness of the payout. My thinking is more pointed towards repeatability. "You get a one-time offer" means you can't repeat it, but by default "X offers to play game Y" means that if you wish to play 3 games of Y you may, and if you wish to play 1000 games of Y then you may. But "I can do this all day long" is actually different from being able to do it an infinite number of times. Even an action or game that takes 3 seconds can be done only so many times over a 24-hour period. The attitudes "I can do this all day long" and "I can do this all year long" are different, and an attitude of "I can do this for all of eternity" is rarely actually exhibited.
I guess there are 4-5 categories, and we can even assume all of them are expected-utility positive. There are mild chances, like a dice game where you win on 4⁄6 or 1⁄6 of outcomes. Then there are steep chances, like a lottery with a 1-in-a-million payout. Then there are arbitrary-chance games like St. Petersburg, where the payout happens in a vanishingly small portion of outcomes. And then there are infinitesimal win chances with an infinite payout (I guess Pascal's muggings go here). It seems like the first 2 are okay to recommend as rational actions if the EV is positive, while a recommendation to play seems counterintuitive for the latter 2. And I am suspecting that there is a game between the lottery and St. Petersburg that is still finite but which I would try to argue is okay to turn down. Something like locating the correct grain of sand in the galaxy granting you ownership of it. Even if you could enter the contest with the mere act of pointing out a grain of sand, a recommendation to spend a couple of decades sifting through sand seems extreme.
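The contrast between the first category and the St. Petersburg case can be made concrete with a small simulation. This is just an illustrative sketch (the specific dice payouts, +2 on 4⁄6 faces and -3 on 2⁄6, are my own example with EV = +1⁄3): a mild-chance game's sample average settles near its EV after many plays, while St. Petersburg's sample average stays finite yet jumpy, because almost all of its (infinite) EV sits in outcomes you essentially never see.

```python
import random
import statistics

def st_petersburg(rng):
    """One round of St. Petersburg: the pot starts at 1 and doubles
    for each heads flipped before the first tails."""
    pot = 1
    while rng.random() < 0.5:
        pot *= 2
    return pot

def mild_dice(rng):
    """A hypothetical mild-chance game: win 2 on 4 of 6 faces,
    lose 3 on the other 2 faces.  EV = (4/6)*2 - (2/6)*3 = +1/3."""
    return 2 if rng.randint(1, 6) <= 4 else -3

rng = random.Random(0)
n = 100_000

# The dice game's sample mean hugs its EV of +1/3 ...
dice_avg = statistics.fmean(mild_dice(rng) for _ in range(n))

# ... while the St. Petersburg sample mean is finite but erratic,
# dominated by whichever rare long streaks happened to occur.
sp_avg = statistics.fmean(st_petersburg(rng) for _ in range(n))

print(f"mild dice average over {n} plays:      {dice_avg:.3f}")
print(f"St. Petersburg average over {n} plays: {sp_avg:.3f}")
```

Any finite run of St. Petersburg plays produces a modest average, which is part of why "play it, the EV is infinite" feels like bad advice to a mortal player.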
It might lead to a principle that says that butterflies should not buy “house loses” lottery tickets.
In Magic: the Gathering, some formats have no theoretical upper limit on how many cards your deck may contain. However, some events and rulings that wish to limit this do so by requiring that players be able to shuffle their decks without assistance, and in a manner that doesn't stretch the timeframe of the match. So theoretically, if one developed better hand coordination to shuffle more efficiently, one could gain an advantage by having access to decks that benefit from a big catalogue of things to tutor up. So this kind of "edge limitation" is actually in place, but it is often rounded off or ignored ("you can have as many cards as you like" is an accurate enough portrayal for most purposes).
My point was that, for significant potential payouts, unstated factors will dominate the decision. These impact one’s intuition, and lead to a false diagnosis of “irrational”.
It’s pretty darned rational to avoid things that sound like scams, unless you have the energy and knowledge to know the difference. If someone’s offering me $lots out of the blue, it’s probably a trick.
In addition, there are declining marginal utility and second-order effects (like how it impacts your future reputation and self image) that are very hard to model, and get included in our instincts—rational but illegible.
If something is evolutionarily selected but is implemented as a reflex, I have an icky feeling calling such calculations "rational".
I guess for real situations a certain amount of "fighting the actual" is pretty much always relevant. Just because you have recognised and formulated the problem one way doesn't mean you have construed it correctly.