Humans are not only gambling when another human explicitly offers them a bet. Humans implicitly gamble all the time: for example, when you cross the street, you’re gambling that the risk of getting hit by a car and dying doesn’t outweigh whatever gain you expect from crossing the street (e.g. getting to school or work). Dutch book arguments in this context say that if an agent doesn’t play according to the rules of probability, then under adversarial assumptions the world can screw them over. It’s valuable to know what can happen under adversarial assumptions even if you don’t expect those assumptions to hold.
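To make the implicit bet concrete, here is a minimal sketch of the expected-value comparison; every number is made up purely for illustration and is not anything the comment specifies:

```python
# Back-of-the-envelope version of the street-crossing "bet".
# All numbers are assumptions chosen only to illustrate the structure.
p_death = 1e-8        # assumed chance of being killed on this one crossing
loss    = 10_000_000  # assumed dollar-equivalent disvalue of dying
gain    = 5           # assumed dollar-equivalent value of getting across

ev_cross = gain - p_death * loss  # expected value of taking the bet
ev_stay  = 0.0                    # expected value of refusing it

print(ev_cross, ev_stay)  # 4.9 vs 0.0: with these numbers, crossing wins
```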
Therefore, it seems that making inaccurate probability estimates is compatible with success in fields that require making decisions with uncertain outcomes.
This isn’t strong evidence; you’re mixing up P(is successful | makes good probability estimates) with P(makes good probability estimates | is successful).
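A toy Bayes calculation, with made-up numbers, shows how far apart the two conditionals can be:

```python
# Toy numbers, all assumed, to show that P(success | good estimates) and
# P(good estimates | success) need not resemble each other.
p_good         = 0.2   # assumed base rate of people who make good estimates
p_success_good = 0.5   # assumed P(success | good estimates)
p_success_bad  = 0.4   # assumed P(success | poor estimates)

p_success = p_success_good * p_good + p_success_bad * (1 - p_good)
p_good_given_success = p_success_good * p_good / p_success

print(round(p_good_given_success, 2))
# ~0.24: most successful people in this toy world make poor estimates,
# even though good estimates genuinely raise the chance of success.
```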
Don’t you think humans cross the street not because they’ve weighed the benefits versus the dangers, or some such, but because that’s what they’ve been taught to do, and probability calculations be damned?
When you live in a country where many people drive without seatbelts, you’re prone to emulate that behavior. It’s not like you’re collectively “betting” in a different manner, or evaluating the dangers differently. It’s more of a monkey-see, monkey-do heuristic.
Just because you don’t understand the game you’re playing doesn’t mean you’re not playing it. The street is offering you a bet, and if you don’t understand that, then… well, not much happens, but the bet is still there.
By the same token, fish in an aquarium—or Braitenberg vehicles—are constantly making bets they don’t realize they’re making. Swim to this side: be first to the food, but exert energy getting there.
Your perspective is valid, but if the agents refuse to see, or are incapable of seeing, the situation from a betting perspective, you have to ask how useful that perspective is (not necessarily the thinking in terms of expected utility, best case, worst case, etc., but the “betting” aspect of it). It may be a good intuition pump, as long as we keep in mind that people don’t work that way.
Do fish think in terms of expected value? Of course not. Evolutions make bets, and they can’t think at all. Refactored Agency is a valuable tool—anything that can be usefully modeled as a goal-seeking process with uncertain knowledge can also be usefully modeled as making bets. How useful is it to view arbitrary things through different models? Well, Will Newsome makes a practice of it. So it’s probably good for having insights, but caveat emptor.
The more completely the models describe the underlying phenomenon, the more isomorphic all models should be (in their Occamian formulation), until eventually we’re only exchanging variable names.
Yes; to check your visual acuity, you block off one eye, then open that one and block the other. To check (and improve) your conceptual acuity, you block off everything that isn’t an agent, then you block off everything that isn’t an algorithm, then you block off everything that isn’t an institution, etc.
Unless you can hypercompute, in which case that’s probably not a useful heuristic.
This is off topic, but I’m really disappointed that Braitenberg vehicles didn’t turn out to be wheeled fish tanks that allowed the fish to explore your house.
Don’t you think humans cross the street not because they’ve weighed the benefits versus the dangers, or some such, but because that’s what they’ve been taught to do, and probability calculations be damned?
What they’ve been taught to do is weigh the benefits versus the dangers (although there are not necessarily any probability calculations going on). The emphasis in teaching small children how to cross the road is mainly on the dangers, since those will invariably be of a vastly larger scale than the trifling benefit of saving a few seconds by not looking.
Does “Mommy told me to look for cars, or bad things happen” and “if I don’t look before I cross, Mommy will punish me” count as weighing the benefits versus the dangers? If so, we agree.
I just wonder if the bet analogy is the most natural way of carving up reality, as it were.
Why did the rationalist cross the road? - He made a bet. (Badum-tish!)
Does “Mommy told me to look for cars, or bad things happen” and “if I don’t look before I cross, Mommy will punish me” count as weighing the benefits versus the dangers?
Perhaps these things are done differently in different cultures. This is how it is done in the U.K. Notice the emphasis throughout on looking to see if it is safe, not on rules to obey because someone says so; punishment figures not at all.
The earlier “Kerb Drill” mentioned in that article was a set of rules: look right, look left, look right again, and if clear, cross. That is why it was superseded.
One thing I should have mentioned earlier: it’s one thing to claim that humans implicitly gamble all the time, another to claim that they implicitly assign probabilities when they do. It seems like when people make decisions whose outcomes they aren’t sure of, most of the time “they’re using heuristics that bypass probability” is a better model of their behavior than “they’re implicitly assigning such-and-such probabilities.”
Well, I think that depends on what you mean by “implicitly.” As I mentioned in another comment, I think there’s a difference between assigning probabilities in System 1 and assigning probabilities in System 2, and that probably many people are good at the former in their domains of expertise but bad at the latter. Which do you mean?
What would be such adversarial assumptions in your street-crossing example?

I’m standing at a 4-way intersection. I want to go to the best restaurant at the intersection. To the west is a three-star restaurant, to the north is a two-star restaurant, and to the northwest, requiring two street-crossings, is a four-star restaurant. All of the streets are equally safe to cross except for the one between the western restaurant and the northern one, which is more dangerous. So going west, then north is strictly dominated by going north, then west. Going north and eating there is strictly dominated by going west and eating there. This means that if I cross one street, and then change my mind about where I want to eat based on the fact that I didn’t die, I’ve been Dutch-booked by reality.
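One way to see the dominance structure is to price out the four plans; the survival probabilities and utilities below are my own assumptions, not anything stated in the example:

```python
# Toy encoding of the intersection example. Numbers are assumed for illustration.
P_SAFE, P_RISKY = 0.9999, 0.99  # assumed survival probabilities per crossing
U = {"west": 3, "north": 2, "northwest": 4, "dead": -1000}  # assumed utilities

plans = {
    "north, eat there (2-star)": P_SAFE * U["north"] + (1 - P_SAFE) * U["dead"],
    "west, eat there (3-star)":  P_SAFE * U["west"] + (1 - P_SAFE) * U["dead"],
    "north then west (4-star)":  P_SAFE**2 * U["northwest"]
                                 + (1 - P_SAFE**2) * U["dead"],
    "west then north (4-star)":  P_SAFE * P_RISKY * U["northwest"]
                                 + (1 - P_SAFE * P_RISKY) * U["dead"],
}
for name, ev in sorted(plans.items(), key=lambda kv: -kv[1]):
    print(f"{name:30s} {ev:9.3f}")

# With these assumptions the two dominance claims show up directly:
#   west-then-north < north-then-west   (same destination, strictly more risk)
#   north-and-eat   < west-and-eat      (same risk, worse restaurant)
```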
That might need a few more elements before it actually restricts you to VNM-rationality.

Where is reality’s corresponding utility gain?

The bad news is there is none. The good news is that this means, under linear transformation, that there is such a thing as a free lunch!
It’s valuable to know what can happen under adversarial assumptions even if you don’t expect those assumptions to hold.
That sounds right; the question is the extent of that value, and what it means for doing epistemology and decision theory and so on.
This isn’t strong evidence; you’re mixing up P(is successful | makes good probability estimates) with P(makes good probability estimates | is successful).
Tweaked the wording, is that better? (“Compatible” was a weasel word anyway.)
Therefore, it seems that the relationship between being able to make accurate probability estimates and success in fields that don’t specifically require them is weak.
I would still dispute this claim. My guess of how most fields work is that successful people in those fields have good System 1 intuitions about how their fields work and can make good intuitive probability estimates about various things even if they don’t explicitly use Bayes. Many experiments purporting to show that humans are bad at probability may be trying to force humans to solve problems in a format that System 1 didn’t evolve to cope with. See, for example, Cosmides and Tooby 1996.

Thanks. I was not familiar with that hypothesis, will have to look at C&T’s paper.