What if you have option c: think it through and figure out that the actual chance is 1 in a billion? This summarizes the issue completely. Suppose there are three populations of agents:
A: agents who grossly overestimate the chance of winning but somehow don't buy the ticket (perhaps the reasoning behind the estimate, due to its sloppiness, is not given enough weight against the 'too good to be true' heuristic),
B: agents who grossly overestimate the chance of winning and buy the ticket,
C: agents who correctly estimate the chance of winning and don't buy the ticket.
C does best, A does second best, and B loses. B may also think itself a rationalist, but it is behaving irrationally by failing to account for its own cognitive constraints. Perhaps agents from A who read about cognitive biases and decide they don't have those biases become agents in B, while becoming an agent in C requires both some natural aptitude and training.
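A minimal sketch of the expected-value comparison, just to make the arithmetic concrete. The ticket price, jackpot, and the inflated estimate are illustrative assumptions, not figures from the discussion above; only the 1-in-a-billion true odds come from it.

```python
# Illustrative numbers only: price and prize are assumed for the example.
TICKET_PRICE = 1.0        # assumed cost of one ticket
PRIZE = 100_000_000.0     # assumed jackpot
TRUE_P = 1e-9             # the "actual chance": 1 in a billion

def expected_outcome(buys_ticket: bool) -> float:
    """Expected monetary outcome of the decision, evaluated at the TRUE odds."""
    if not buys_ticket:
        return 0.0
    return TRUE_P * PRIZE - TICKET_PRICE

# A: overestimates the odds, but still doesn't buy -> 0 on this decision
# B: overestimates the odds, and buys -> judged by the true odds, not its estimate
# C: estimates correctly, doesn't buy -> 0 on this decision
agents = {
    "A": expected_outcome(buys_ticket=False),
    "B": expected_outcome(buys_ticket=True),
    "C": expected_outcome(buys_ticket=False),
}

for name, ev in agents.items():
    print(f"{name}: expected outcome per ticket = {ev:+.2f}")
# Under these assumed numbers B expects to lose about 0.90 per ticket,
# while A and C lose nothing on this particular decision.
```

The point the sketch makes is that what matters for the outcome is the true probability, not the agent's estimate: A and C end up in the same place on this decision, and B pays for acting on its inflated estimate.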