I want Alice to have help choosing what things to do and not do, in the form of easily understandable prices that turn uncertain badness (“it’ll probably be fine, I probably won’t break the camera”) into certain costs (“hmm, am I really going to get $70 worth of value from using this camera?”).
I am most interested in this in contexts where self-insurance is not reasonable to expect. Like, if some satellite company / government agency causes Kessler Syndrome, they’re not going to be able to pay back the rest of the Earth on their own, and so there’s some temptation to just ignore that outcome; “we’ll be bankrupt anyway.” But society as a whole very much does not want them to ignore that outcome; society wants avoiding that outcome to be more important to them than the survival of their company, and something like apocalypse insurance seems like the right way to go about that.
But how do you price the apocalypse insurance? You don’t want to just kick the can down the road, where now the insurance company is trying to look good to regulators while being cheap enough for customers to get business, reasoning “well, we’ll be bankrupt anyway” about the catastrophe happening.
You mention the “unilateralist’s curse”, but this sounds more like the “auction winner’s curse”,
I think those are very similar concepts, to the point of often being the same.
which I would expect an insurer to already be taking into account when setting their prices (as that’s the insurer’s entire core competency).
I probably should have brought up the inexploitability concept from Inadequate Equilibria; I’m arguing that mistaken premiums are inexploitable, because Carol can’t make any money from correcting Bob’s mistaken belief about Alice, and I want a mechanism to make it exploitable.
Normally insurers just learn from bad bets after the fact, and from society’s point of view this is basically fine; but when we’re insuring catastrophic risks (and using insurance premiums to decide whether or not to embark on those risks), I think it’s worth trying to make the market exploitable.
If you buy $1 of synthetic risk for $0.05, does that mean you get $1.00 if Alice breaks the camera, and $0.00 if Alice does not?
Yes, the synthetic risk paying out is always conditional. The sketch I have for that example is that Bob has to offer $10 of synthetic risk at each percentage point, except I did the math as though it were continuous, which you can also do by just choosing midpoints. So there’s $10 for sale at $0.55, another $10 for $0.65, and so on; Carol’s $40 for $2.80 comes from buying the $0.55 + $0.65 + $0.75 + $0.85 tranches (and she doesn’t buy the $0.95 one because it looks like a 5 cent loss to her). That is, your tentative guess looks right to me.
The $910 that goes unsold is still held by Bob, so if the camera is wrecked Bob has to pay themselves $910, which doesn’t matter.
As you point out, Bob pays $1.25 for the first $50 of risk, which ends up being a wash. Does that just break the whole scheme, since Bob could just buy all the required synthetic risk and replicate the two-party insurance market? Well… maybe. Maybe you need a tiny sales tax, or something, but I think Bob is incentivized to participate in the market. Why did we need to require it, then? I don’t have a good answer there. (Maybe it’s easier to have mandatory prediction markets than just legalizing them.)
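For concreteness, here’s the tranche arithmetic as a quick Python sketch. The $1000 camera value, Bob’s 5% belief, and Carol’s 9% belief are my assumed numbers, reverse-engineered from the figures above:

```python
# Sketch of the tranche scheme: a $1000 camera split into 100 tranches
# of $10 synthetic risk, each priced at its percentage-point midpoint.
# Beliefs assumed here: Bob thinks P(break) = 5%, Carol thinks 9%.

TRANCHE = 10.0   # dollars of synthetic risk per tranche
N = 100          # 100 tranches cover the $1000 camera

def midpoint_price(k):
    """Price of tranche k: $10 notional at the (k + 0.5)% midpoint."""
    return TRANCHE * (k + 0.5) / 100

def tranches_worth_buying(p_belief):
    """A rational buyer takes any tranche priced below its expected payout."""
    return [k for k in range(N) if midpoint_price(k) < p_belief * TRANCHE]

bob = tranches_worth_buying(0.05)                                 # tranches 0..4
carol = [k for k in tranches_worth_buying(0.09) if k not in bob]  # tranches 5..8

bob_cost = sum(midpoint_price(k) for k in bob)      # $1.25 for $50 of risk
carol_cost = sum(midpoint_price(k) for k in carol)  # $2.80 for $40 of risk
unsold = TRANCHE * (N - len(bob) - len(carol))      # $910 stays with Bob
```

Carol’s $2.80 for $40 and Bob’s $1.25 for $50 both fall out of the midpoint prices, and the remaining $910 of tranches are priced above anyone’s belief, so they go unsold.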
I probably should have brought up the inexploitability concept from Inadequate Equilibria; I’m arguing that mistaken premiums are inexploitable, because Carol can’t make any money from correcting Bob’s mistaken belief about Alice, and I want a mechanism to make it exploitable.
Ah, this clarifies the intent.
As you point out, Bob pays $1.25 for the first $50 of risk, which ends up being a wash. Does that just break the whole scheme, since Bob could just buy all the required synthetic risk and replicate the two-party insurance market?
As stated, I think so, but I think that’s a pretty easy fix. I think it works if, any time Bob is going to offer such an insurance policy to Alice at a certain rate, Bob must also offer at least 1x as much synthetic risk for sale under this scheme, at all amounts higher than that rate, to any other insurer in the market. I’m not sure whether there’s any direct precedent for forcing something to be sold at a certain price to one party as a condition of selling it to another party, though it rhymes with right of first refusal.
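A minimal sketch of that obligation, carrying over the $1000 camera with $10 tranches from the example above (the function name and defaults are illustrative):

```python
# Proposed fix, sketched: before Bob insures Alice at policy_rate, he
# must also list (at least 1x) synthetic risk at every percentage-point
# midpoint above that rate, available to any other insurer.

def required_offers(policy_rate, notional=1000.0, tranche=10.0, multiplier=1.0):
    """Return the (price, amount) list Bob must offer to the market."""
    offers = []
    for k in range(int(notional // tranche)):
        midpoint = (k + 0.5) * tranche / notional  # 0.5%, 1.5%, ...
        if midpoint > policy_rate:
            offers.append((round(tranche * midpoint, 2), tranche * multiplier))
    return offers

# If Bob prices Alice's policy at 5%, he must list $10 at $0.55,
# $10 at $0.65, ..., up through $10 at $9.95: $950 of risk in total.
offers = required_offers(0.05)
```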
One note is that, by forcing Bob to sell risk to Carol, you’re also creating a “cause Alice to break the camera” bounty for Carol. At a 1x multiplier and nontrivial probabilities of loss, that bounty is probably not a very powerful force, but it’s something to be aware of when scaling to much higher multipliers.
In any case, cool mechanism!
Edit: Also, if your concern is that Bob probably doesn’t have the ability to pay out if Alice breaks the camera, forcing Bob to collect additional money now from Carol in exchange for owing even more money if Alice breaks the camera doesn’t necessarily help. Maybe it does if you make Bob put all of the risk that Carol bought into escrow, though? That does put a certain floor on the cost of insurance that has less to do with risk and more to do with interest rates, and with Carol’s willingness to lose small amounts of money in expectation to lock up the liquidity of her competitor. Again, this seems unlikely to be much of an issue in practice for >1% chances of insurance paying out and a 1x multiplier.
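To put rough numbers on that floor, under the camera example’s figures plus two pure assumptions on my part (a 5% annual risk-free rate and a one-year policy):

```python
# Illustrative only: the interest-rate floor on insurance cost when Bob
# must escrow the notional Carol bought, independent of the actual risk.

notional_escrowed = 40.0   # the $40 of synthetic risk Carol bought
premium_received = 2.80    # what Carol paid Bob for it
risk_free_rate = 0.05      # assumed annual risk-free rate
term_years = 1.0           # assumed one-year policy

# Bob must lock up the payout he might owe Carol, net of her premium,
# so his carrying cost is interest on that capital, whatever the risk.
capital_locked = notional_escrowed - premium_received
carrying_cost = capital_locked * risk_free_rate * term_years
```

Here that’s about $1.86 per year of carrying cost Bob has to recover from Alice’s premium regardless of how likely the camera is to break, which is the floor in question; Carol can raise it further just by buying tranches she expects to lose a little on.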