Willing gamblers, spherical cows, and AIs

Note: posting this in Main rather than Discussion in light of recent discussion that people don’t post in Main enough and their reasons for not doing so aren’t necessarily good ones. But I suspect I may be reinventing the wheel here, and someone else has in fact gotten farther on this problem than I have. If so, I’d be very happy if someone could point me to existing discussion of the issue in the comments.
tl;dr: Gambling-based arguments in the philosophy of probability can be seen as depending on a convenient simplification: the assumption that people are far more willing to gamble than they are in real life. Some justifications for this simplification can be given, but it’s unclear to me how far they go and where they start to become problematic.
In “Intelligence Explosion: Evidence and Import,” Luke and Anna mention that, “Except for weather forecasters (Murphy and Winkler 1984), and successful professional gamblers, nearly all of us give inaccurate probability estimates...” When I read this, it struck me as an odd thing to say in a paper on artificial intelligence. I mean, those of us who are not professional accountants tend to make bookkeeping errors, and those of us who are not math, physics, engineering, or economics majors make mistakes on GRE quant questions covering material we were supposed to have learned in our first two years of high school. Why focus on this particular human failing?
A related point can be made about Dutch Book Arguments in the philosophy of probability. Dutch Book Arguments claim, in a nutshell, that you should reason in accordance with the axioms of probability because if you don’t, a clever bookie will be able to take all your money. But another way to prevent a clever bookie from taking all your money is simply not to gamble. And many people don’t gamble, or do so only rarely.
Dutch Book Arguments seem to implicitly make what we might call the “willing gambler assumption”: everyone always has a precise probability assignment for every proposition, and they’re willing to take any bet which has a non-negative expected value given their probability assignments. (Or perhaps: everyone is always willing to take at least one side of any proposed bet.) Needless to say, even people who gamble a lot generally aren’t that eager to gamble.
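To make the assumption concrete, here is a minimal sketch (my own toy illustration, with invented numbers) of how a bookie could exploit an agent who satisfies the willing gambler assumption while holding incoherent credences:

```python
# A toy Dutch book. The agent assigns P(rain) = 0.6 and P(not rain) = 0.6,
# which is incoherent (the credences sum to 1.2). Under the willing gambler
# assumption she accepts any bet with non-negative expected value by her own
# lights, so she'll pay $0.60 for each $1 ticket.

credences = {"rain": 0.6, "not_rain": 0.6}
stake = 1.0  # each ticket pays $1 if its outcome occurs

# Price each ticket at exactly the agent's expected value, so she accepts both.
prices = {outcome: p * stake for outcome, p in credences.items()}
total_paid = sum(prices.values())  # $1.20 paid up front

for actual in ("rain", "not_rain"):
    payout = stake  # exactly one ticket pays off
    print(f"{actual}: paid {total_paid:.2f}, received {payout:.2f}, "
          f"net {payout - total_paid:+.2f}")
# Either way the agent is out $0.20: a guaranteed loss, i.e. a Dutch book.
```

Note that the bookie never needed to know anything about the weather, only the agent’s credences and her willingness to bet on them.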
So how does anyone get away with using Dutch Book Arguments for anything? A plausible answer comes from a joke Luke recently told in his article on Fermi estimates:
Milk production at a dairy farm was low, so the farmer asked a local university for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist. After two weeks of observation and analysis, the physicist told the farmer, “I have the solution, but it only works in the case of spherical cows in a vacuum.”
If you’ve studied physics, you know that physicists don’t use those kinds of approximations only when doing Fermi estimates; often such approximations can be counted on to yield results that are in fact very close to reality. So maybe the willing gambler assumption works as a sort of spherical cow, one that allows philosophers working on issues related to probability to generate important results in spite of the assumption’s unrealistic nature.
Some parts of how this would work are fairly clear. In real life, bets have transaction costs; they take time and effort to set up and collect. But it doesn’t seem too bad to ignore that fact in thought experiments. Similarly, in real life money has declining marginal utility; the utility you gain by doubling your money is less than the utility you lose by losing all of it. In principle, if you know someone’s utility function over money, you can take a bet with zero expected value in dollar terms and replace it with a bet that has zero expected value in utility terms. But ignoring that and just using dollars for your thought experiments seems like an acceptable simplification for convenience’s sake.
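As a toy illustration of that replacement step (assuming, purely for concreteness, a logarithmic utility function and numbers I’ve made up):

```python
import math

# A bet that is fair in dollars but not in utility, and the adjustment that
# makes it fair in utility terms, assuming u(w) = log(w).
wealth = 100.0
p_win = 0.5
loss = 50.0  # lose $50 with probability 0.5

def u(w):
    return math.log(w)

# Fair in dollars: the win must also be $50 (0.5 * 50 - 0.5 * 50 = 0).
win_dollar_fair = loss
delta_eu = (p_win * u(wealth + win_dollar_fair)
            + (1 - p_win) * u(wealth - loss)
            - u(wealth))
print(delta_eu)  # negative: declining marginal utility makes this a bad deal

# Fair in utility: choose the win x so that
# p * log(W + x) + (1 - p) * log(W - loss) = log(W), then solve for x.
win_utility_fair = math.exp((u(wealth) - (1 - p_win) * u(wealth - loss)) / p_win) - wealth
print(win_utility_fair)  # ~100: you'd need to win about $100 to offset risking $50
```

With log utility and half your wealth at stake, a dollar-fair coin flip is a utility loss; you’d need roughly two-to-one payout odds before the bet becomes fair in utility terms.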
Even granting those assumptions, so that accepting bets with zero expected (dollar) value isn’t positively harmful, we might still wonder why our spherical-cow gambler should accept them. Answer: because if necessary you could always add a penny to the side of the bet you want the gambler to take, but having to mention the extra penny every time is annoying, so you may as well assume the gambler takes any bet with non-negative expected value rather than requiring positive expected value.
Another thing that keeps people from gambling more in real life is the principle that if you can’t spot the sucker in the room, it’s probably you. If you’re unsure whether an offered bet is favorable to you, the mere fact that someone is offering it is pretty strong evidence that it’s in their favor. One way to avoid this problem is to stipulate that in Dutch Book Arguments, the bookie doesn’t know anything more about whatever the bets are about than the person being offered them, and the person being offered them knows this. The bookie has to construct her book primarily from knowing the other person’s propensities to bet. Nick Bostrom explicitly makes such an assumption in a paper on the Sleeping Beauty problem; maybe other people make it explicitly as well, I don’t know.
In this last case, though, it’s not totally clear whether limiting the bookie’s knowledge is all you need to bridge the gap between the willing gambler assumption and how people behave in real life. In real life, people don’t often make very exact probability assignments, and may be aware of their own confusion about how to make them. Given that, it seems reasonable to hesitate before betting (even if you ignore transaction costs and declining marginal utility, and know that the bookie knows no more about the subject of the bet than you do), because you’d still know the bookie might be trying to exploit your confusion over how to make exact probability assignments.
At an even simpler level, you might adopt a rule: “before making multiple bets on related questions, check to make sure you aren’t guaranteeing you’ll lose money.” After all, real bookies offer odds such that anyone stupid enough to bet on each side of a question with the same bookie would be guaranteed to lose money. In a sense, bookies could be interpreted as “money pumping” the public as a whole. But it turns out that individual bettors are rarely foolish enough to take both sides of the same bet from the same bookie, in spite of the fact that they’re apparently irrational enough to be gambling in the first place.
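Here is a rough sketch of how that margin works, with made-up odds. Decimal odds of 1.87 on each side of an evenly matched contest imply probabilities that sum to more than 1; the excess (the “overround”) is the bookie’s built-in edge:

```python
# Made-up decimal odds on both sides of an evenly matched contest.
decimal_odds = {"team_a": 1.87, "team_b": 1.87}
implied_probs = {side: 1 / o for side, o in decimal_odds.items()}
print(sum(implied_probs.values()))  # ~1.07, i.e. roughly a 7% overround

# Anyone foolish enough to stake $100 on *both* sides is guaranteed to lose:
stake = 100.0
total_staked = stake * len(decimal_odds)
for winner in decimal_odds:
    payout = stake * decimal_odds[winner]  # only the winning ticket pays out
    print(f"{winner} wins: net {payout - total_staked:+.2f}")  # -$13.00 either way
# Spread across many bettors who each take only one side, the same margin
# quietly "money pumps" the betting public as a whole.
```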
In the end, I’m confused about how useful the willing gambler assumption really is when doing philosophy of probability. It certainly seems like worthwhile work gets done based on it, but just how applicable are those results to real life? How do we tell when we should reject a result because the willing gambler assumption causes problems in that particular case? I don’t know.
One possible justification for the willing gambler assumption is that even those of us who don’t literally gamble, ever, still must make decisions where the outcome is not certain, and therefore we need to do a decent job of making probability assignments for those situations. But there are lots of people who are successful at their chosen field (including fields that require decisions with uncertain outcomes) who aren’t weather forecasters or professional gamblers, and who therefore can be expected to make inaccurate probability estimates. Conversely, it doesn’t seem that the skills acquired by successful professional gamblers give them much of an edge in other fields. So the relationship between being able to make accurate probability estimates and success in fields that don’t specifically require them seems weak.
Another justification for pursuing lines of inquiry based on the willing gambler assumption, one that will be particularly salient for people on LessWrong, is that if we want to build an AI based on an idealization of how rational agents think (Bayesianism or whatever), we need tools like the willing gambler assumption to figure out how to get the idealization right. That sounds plausible at first. But if we flawed humans have any hope of building a good AI, it seems like an AI that’s as flawed as (but no more flawed than) humans should also have a hope of self-improving into something better. An AI might be programmed in a way that makes it a bad gambler, but be aware of this limitation and left to decide for itself whether, when it self-improves, to focus on improving its gambling ability or on improving other aspects of itself.
As someone who cares a lot about AI, this question of just how useful various idealizations are for thinking about AI, and possibly for programming an AI one day, is especially important to me. Unfortunately, I’m not sure what more to say about it, so at this point I’ll turn the question over to the comments.