Has anybody thought of prediction markets as a form of insurance? Suppose you don’t like Hillary; then you can bet that she wins the nomination. If she doesn’t, you’re happy because you don’t like her. If she does, you win some money. Either way, you win.
Of course, if people did this it would make prediction markets less accurate.
I know Stephen Hawking did. He bet against the existence of black holes. If it turned out that all the work he had done was worthless, at least he’d get a free magazine subscription.
Why would it make prediction markets less accurate? Does this problem adversely affect the prices of actual insurance?
This might be an interesting question. Also, the following version:
If you are getting the insurance by placing bets, then people will be buying the candidates they don’t like. If lots of people don’t like a candidate, that is not a sign that the candidate will do well.
Insurance adds liquidity to the market. Someone has to pay the speculators a premium for being right, and sometimes that’s the insurance-buyers. Higher premiums for speculators → better quality speculation.
But speculators should be able to get free money from those buying insurance, which will even it out some.
To the extent that they can distinguish, yes, and this may very well leave the markets performing well enough (and perhaps still better than anything else). It does necessarily add noise, however, which I understood to be the original point—it makes things less accurate.
But wouldn’t rational bettors’ willingness-to-pay for a stake in a candidate be the same in both cases (buying for insurance vs. buying as a speculator)? Their WTP would be determined entirely by odds, right?
Example: Llewelyn has (in your view) a 5-1 chance of winning. Maxine, whom you despise, has 1-5. Let’s say I offer to sell you a voucher that is redeemable for $5 in the event the hated Maxine wins (and is just worthless paper otherwise). How much would you be willing to pay for this voucher?
I can’t speak for you personally, but wouldn’t the money-maximizer pay up to $5/6, about 83 cents, for it? (In five of the six possible worlds, L wins and the voucher is worthless paper; in the sixth, M wins and it pays out five bucks; pay more than that and you expect to lose money.)
And my point is this: the fact that you hate Maxine was irrelevant all along. You (or, again, our hypothetical rational agent) should be willing to pay up to that same 83 cents no matter how you feel about Llewelyn and Maxine, given a fixed estimate of how likely each is to win.
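(If it helps, here is that arithmetic as a quick Python sketch. It assumes my 1-in-6 reading of Maxine’s 1-5 odds; the names are just for illustration.)

```python
# Break-even price of the Maxine voucher for a pure money-maximizer:
# it is simply the expected payout.

def break_even_price(payout, p_win):
    """Highest price a risk-neutral bettor should pay for the voucher."""
    return payout * p_win

p_maxine = 1 / 6        # reading "1-5" as odds of 1 to 5, i.e. one world in six
voucher_payout = 5.0    # the voucher redeems for $5 if Maxine wins

print(f"Pay at most ${break_even_price(voucher_payout, p_maxine):.2f}")
# -> Pay at most $0.83; pay more and you expect to lose money.
```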
If I have erred, please point out where my map is wrong.
No. In short: with insurance you’re paying money to reduce risk. Thus, WTP goes up.
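One way to make that concrete (a sketch only, with an assumed log utility and made-up numbers, not anything stated in the thread): a risk-averse agent facing a possible loss will pay more for the covering bet than its expected payout.

```python
import math

# Sketch: a risk-averse agent (log utility assumed) facing a possible loss
# is willing to pay more than the expected payout for a bet that covers it.
wealth = 1_000.0   # illustrative numbers, not from the thread
loss = 200.0       # what Maxine's win would cost this agent
p = 1 / 6          # probability that Maxine wins

def expected_utility(price, insured):
    if insured:
        # The bet repays the loss exactly, so final wealth is certain.
        return math.log(wealth - price)
    return p * math.log(wealth - loss) + (1 - p) * math.log(wealth)

# Highest price at which buying the covering bet still beats going uncovered.
baseline = expected_utility(0.0, insured=False)
price = 0.0
while expected_utility(price + 0.01, insured=True) >= baseline:
    price += 0.01

print(f"Risk-neutral value of the bet: ${p * loss:.2f}")    # about $33.33
print(f"Max price for the risk-averse agent: ${price:.2f}")  # a few dollars more
```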
It depends on how much I stand to lose, independent of any bet, if Maxine wins.
Why should it depend on how much you stand to lose if Maxine wins? Whatever Maxine’s election would cost you (as long as the stakes don’t approach your total wealth, or is that what you meant?), you’ll gain more on average by betting in line with the odds as you estimate them.
Yes, if the people selling you insurance are rational, then you would gain more on average by not buying insurance, putting edge cases aside. That is true both of ordinary insurance and bets taken for insurance.
But the point of insurance is to reduce risk, not maximize gain.
For example, suppose Maxine winning would cost my business $200, and I cannot lose more than $50 and stay in business. Then I see a bet that pays $200 if Maxine wins, which costs me $50 to buy. It would be worth taking that bet regardless of the probability of Maxine winning, if I’m very risk-averse regarding losing my business. It turns a possible loss of $200 into a guaranteed loss of $50.
If the actual probability of Maxine winning is 10%, then the expected value of not betting is -$20, while the expected value of betting is -$50, so if I want to maximize gain I should not take the bet. However, taking the bet has a maximum loss of $50, while not taking the bet has a maximum loss of $200 (costing me my business), so in taking the bet I’ve gone from a 10% chance of losing my business to a 0% chance of losing my business. So if I want to minimize the probability of losing my business (all else equal) I should take the bet.
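Here is the same comparison as a short Python sketch, using the numbers from the example (the 10% probability, the $200 hit, and the $50 bet); the variable names are mine.

```python
# The business example above: expected value favors skipping the bet,
# but the worst case favors the hedge.
p_maxine = 0.10         # probability Maxine wins
business_hit = 200.0    # cost to the business if she does
hedge_cost = 50.0       # price of a bet that pays $200 if she wins
survivable_loss = 50.0  # largest loss the business can absorb

outcomes = {
    "no bet": {"maxine wins": -business_hit, "maxine loses": 0.0},
    "bet":    {"maxine wins": -business_hit + 200.0 - hedge_cost,  # -$50
               "maxine loses": -hedge_cost},                       # -$50
}

for name, o in outcomes.items():
    ev = p_maxine * o["maxine wins"] + (1 - p_maxine) * o["maxine loses"]
    worst = min(o.values())
    print(f"{name}: EV {ev:+.0f}, worst case {worst:+.0f}, "
          f"survives worst case: {worst >= -survivable_loss}")
# no bet: EV -20, worst case -200, survives worst case: False
# bet:    EV -50, worst case -50,  survives worst case: True
```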