[EDITED substantially from initial state to fix some serious errors.]
If we try to do this properly with Bayes, here’s how it goes. Your odds ratio Pr(N) : Pr(E) presumably starts at 1:1, since the initial situation is symmetrical. Then it needs to be multiplied by Pr(N says 15% | N) : Pr(N says 15% | E) and then by Pr(E says 20% | N) : Pr(E says 20% | E).
Continuing to assume that we have no outside knowledge that would distinguish N from E (in reality we might; they’re quite different people; this might also change our prior odds ratio), we’d better have a single function f(p) giving Pr(say p | was yours) : Pr(say p | not yours), and then our posterior odds ratio is f(15%) / f(20%).
(And then Pr(E) is f(20%) / (f(15%) + f(20%)) and Eliezer should get that fraction of the $20.)
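To make the mechanics concrete, here’s a minimal Python sketch of that split rule (the function name is mine, and the lambda at the end is just one illustrative choice of f, the one that turns out to reproduce option 4 below):

```python
def eliezer_share(f, p_n=0.15, p_e=0.20, pot=20.0):
    """E's share of the pot, given a likelihood-ratio function
    f(p) = Pr(say p | money was yours) / Pr(say p | money was not yours)."""
    odds_n, odds_e = f(p_n), f(p_e)          # posterior odds N : E, starting from 1:1
    return pot * odds_e / (odds_n + odds_e)

# One illustrative choice of f; per the toy models below it reproduces option 4:
print(eliezer_share(lambda p: p / (1 - p)))  # about 11.72
```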
I don’t see any strong grounds for singling out one choice of f as The Right One. At least for comparably “round” choices of p, f should be increasing. It shouldn’t go all the way to 0:1 at p=0 or 1:0 at p=1, but for people as clueful about probabilities as E and N it should be pretty close at those extremes.
But, still, where does f come from? Presumably you have some idea of how carefully you counted your money (and some idea of how the other guy counted, which complicates matters a bit more), and the more carefully you counted (and weren’t aware that the $20 was yours) the more likely you are to give a low value of p, and also the more likely it is that the money really isn’t yours.
Toy model #1: there’s a continuum of counting procedures, each of which puts in an extra $20 with probability q for some 0 ≤ q ≤ 1, and you know in hindsight which you employed, and you give that value of q as your estimate. But q isn’t actually the right probability that the money is yours; the right value depends on what the other person is expected to have done too. We should think better of both N and E than to model them this way.
If you do this, though, what happens is that if the prior probability of error for each party is g (for “goof”) then f(q) = q(1-g) : (1-q)g, and then f(q1) / f(q2) = q1(1-q2) : (1-q1)q2. This is the same as option 4 in the article.
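A quick numerical check of that cancellation, in Python (the function name is mine; this just re-runs the algebra above, it isn’t anything from the article):

```python
def model1_odds(q1, q2, g):
    """Toy model #1: posterior odds N : E when each party quotes the q of
    their own counting procedure and the background goof rate is g."""
    f = lambda q: q * (1 - g) / ((1 - q) * g)   # f(q) = q(1-g) : (1-q)g, as a number
    return f(q1) / f(q2)

# g cancels in the ratio, leaving q1(1-q2) : (1-q1)q2, i.e. option 4:
for g in (0.01, 0.3, 0.9):
    print(model1_odds(0.15, 0.20, g))   # about 0.7059 every time, i.e. 0.12 : 0.17
```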
Toy model #2: there’s a continuum of counting procedures, as before, and which one you use is chosen according to some pdf, the same pdf for both parties, and you know which procedure you used and what the pdf is. Then your Pr($20 is mine | q) is Pr(I goofed and the other guy didn’t | q, exactly one $20 extra) = q Pr(other guy didn’t goof) / (q Pr(other guy didn’t goof) + (1-q) Pr(other guy goofed)), so if the overall goof probability is g (this is all you need to know; the whole pdf is unnecessary) then for a given value of q, the probability you quote will be q(1-g) / [q(1-g)+(1-q)g]. Which means (scribble, scribble) that if you quote a probability p then your q is (necessarily, exactly) gp / ((1-g)-(1-2g)p). Which means, unless I’m confused (which I might be, because it’s after 1am local time and I know I already made one mistake in the first version of this comment), that when you say p1 and he says p2 the odds ratio is gp1 / ((1-g)-(1-2g)p1) : gp2 / ((1-g)-(1-2g)p2) = p1 ((1-g)-(1-2g)p2) : p2 ((1-g)-(1-2g)p1).
This depends on g, which is an unknown parameter. When g->0 we recover the same answer as in toy model #1. (When errors are very rare, the probability that the $20 is yours is basically the same as the probability that you goofed.) When g->1 the posterior odds approach 1:1. (When non-errors are very rare, Pr(N goofed) and Pr(E goofed) are both very close to 1, so their ratio is very close to 1, so once we know there was exactly one error it’s about equally likely to be either’s.) As g increases, the odds ratio becomes monotonically less unequal.
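Here’s a small sketch of that formula and its limiting behaviour (the names are mine; it just implements the expression derived two paragraphs up):

```python
def model2_odds(p1, p2, g):
    """Toy model #2: posterior odds N : E from quoted probabilities p1, p2
    and background goof rate g, as derived above."""
    d = lambda p: (1 - g) - (1 - 2 * g) * p
    return (p1 * d(p2)) / (p2 * d(p1))

p1, p2 = 0.15, 0.20
print(model2_odds(p1, p2, 1e-9))      # about 0.7059: the g -> 0 limit matches toy model #1
print(model2_odds(p1, p2, 1 - 1e-9))  # about 1.0: the g -> 1 limit gives equal odds
print([round(model2_odds(p1, p2, g), 3) for g in (0.1, 0.3, 0.5, 0.7, 0.9)])
# roughly [0.712, 0.727, 0.75, 0.792, 0.886]: monotonically less unequal as g grows
```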
I don’t see any obvious principled way of estimating g, beyond the trivial observations that it shouldn’t be too small since an error happened in this case and it can’t be too large since N and E were both surprised by it.
If, like Coscott, you feel that when p2=1-p1 (i.e., the two parties agree on the probability that the money belongs to either of them) the posterior odds should be the same as you’d get from either (i.e., you should divide the money in the “obvious” proportions), then this is achieved for g=1/2 and for no other choice of g. For g=1/2, the posterior odds are simply p1 : p2. That’s the original Yudkowsky solution, #1 in the article, with which shminux agrees in comments here.
If, IIRC like someone in the original discussion, you feel that replacing p1,p2 with 1-p1,1-p2 should have the same effect as replacing them with p2,p1 (i.e., if one party says “20% N” and the other says “15% E” it doesn’t matter which is which), then you must take either g=0 (reproducing answer 4) or g=1 (always splitting equally).
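Both of those symmetry claims are easy to check numerically; here’s a sketch (same formula as before, a few arbitrary g values):

```python
def odds(p1, p2, g):
    """Toy-model-#2 posterior odds N : E (same formula as above)."""
    d = lambda p: (1 - g) - (1 - 2 * g) * p
    return (p1 * d(p2)) / (p2 * d(p1))

p1, p2 = 0.15, 0.20
for g in (0.0, 0.3, 0.5, 1.0):
    swapped = odds(p2, p1, g)               # the parties trade quotes: p2, p1
    complemented = odds(1 - p1, 1 - p2, g)  # both quotes complemented: 1-p1, 1-p2
    print(g, round(odds(p1, p2, g), 3), round(swapped, 3), round(complemented, 3))
# swapped == complemented only at g = 0 and g = 1, while at g = 1/2 the plain
# odds come out as exactly p1 : p2 = 0.75
```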
What’s wrong with toy model #2, aside from that annoying free parameter? A few things. Firstly, in reality people tend to be overconfident. (Maybe not smart bias-aware probability-fluent people talking explicitly about probabilities, but I wouldn’t bet on it.) This amounts to saying that we should move q1 and q2 towards 50% somewhat before doing the calculation, which will make the posterior odds less unequal. Exactly how much is anyone’s guess; it depends on our guess of the calibration curves for people like N and E in situations like this.
Secondly, you won’t really know your own procedure’s q exactly. You’ll have some uncertain estimate of it. If my scribblings are right then symmetrical uncertainty in q doesn’t actually change the posterior odds. Your uncertainty about q won’t really be symmetrical, even after accounting for miscalibration—e.g., if you think q=0.01 then you might be 0.02 too low but not 0.02 too high—but for moderate values like 0.15 or 0.20 symmetry’s probably a harmless assumption.
Thirdly, whatever your “internal” estimate of q, it’ll get rounded somewhat, hence the nice round 15% and 20% figures N and E gave. This is probably also a fairly symmetrical affair for probabilities in this range, so again it probably doesn’t make much difference to the posterior odds.
On the basis of all of which, I’ll say:
Answer #4 in the article seems like an upper bound on how unequally the money can reasonably be distributed. If you either adopt the overoptimistic model 1, or take g to be tiny in model 2, and don’t allow for any overconfidence, then you get this answer.
There doesn’t seem to be any very obvious way to choose that free parameter g.
If you take the two parties’ estimates of the probability that they goofed as indicative and take g=(p1+p2)/2, which really isn’t a very principled choice but never mind, then in this case you get a division $20 = $8.35 + $11.65 (checked numerically in the sketch after this list), just barely less unequal than the g=0 solution.
If you take g=0 you get answer #4 again. This is your only choice if you want only the two probability assignments to matter, and not who offers which.
If you take g=1/2 you get answer #1. This is your only choice if you want to divide the money in the ratio p:1-p when both parties agree that Pr(money is N’s) = p.
If you take g=1 you get equal division regardless of quoted probabilities.
Marcello’s answer (#2 in the article) is always less unequal than #4 and is nice and easy to calculate. It might be a good practical compromise for those not quite pragmatic enough to adopt the obvious “meh” solution I proposed elsewhere in comments.
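For concreteness, here’s a sketch of the dollar splits for the g values singled out above (same toy-model-#2 formula; the function name is mine):

```python
def split(p1, p2, g, pot=20.0):
    """(N's share, E's share) of the pot under the toy-model-#2 odds formula."""
    d = lambda p: (1 - g) - (1 - 2 * g) * p
    n, e = p1 * d(p2), p2 * d(p1)            # unnormalised odds N : E
    return round(pot * n / (n + e), 2), round(pot * e / (n + e), 2)

p1, p2 = 0.15, 0.20
for g in (0.0, (p1 + p2) / 2, 0.5, 1.0):
    print(g, split(p1, p2, g))
# roughly: g=0     -> (8.28, 11.72)  answer #4
#          g=0.175 -> (8.35, 11.65)  the g=(p1+p2)/2 suggestion
#          g=0.5   -> (8.57, 11.43)  answer #1, odds simply p1 : p2
#          g=1     -> (10.0, 10.0)   equal division
```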
Was that second Pr() meant to be “Pr(N says 15% | E)”?
Yup. Will fix. Thanks! [EDITED: now fixed; thanks again.]
I am surprised you feel this way. If EY says, “Oh, I put that extra 20 in there,” and NB says he has no idea, then I think EY should get the 20 back.
Pragmatically yes, but if E is foolish enough to say it’s his with probability 1 there’s really still some chance that actually it’s N’s. (Suppose E says “yeah, I put that in” and N replies “Huh? I was 99% sure I put in an extra twenty”.)
E does also happen to be the author of this post. :-)
Quite so. (I wondered about linking to it from the word “foolish” but decided it wasn’t necessary.)