I believe you have a typo in this section with the subscripts:
If the sum of the two reports θ_Jack + θ_Jill is at least $300, the room will be painted. Jack’s payment will be p_Jack = 300 − θ_Jill if the room is painted and zero otherwise. Similarly, Jill’s payment will be p_Jill = 300 − θ_Jill when the room is painted and zero otherwise.
If Jill reports between $150 and $180, the room is painted and Jack gets a payoff of 120 − (300 − θ_Jill) = θ_Jill − 180 < 0, less than his payoff from honesty where the room isn’t painted.
When the room is painted, Jack and Jill have a $300 bill, but the mechanism only collects (300 − θ_Jack) + (300 − θ_Jill) < 300.
If p_Jack = 300 − θ_Jill and p_Jill = 300 − θ_Jill, then the mechanism should be collecting 600 − 2θ_Jill?
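For concreteness, here’s a quick sanity check of that arithmetic (my own, not code from the article; the example reports of $160 and $170 are arbitrary). It compares what the mechanism collects under the payment rule as written against the rule the deficit formula (300 − θ_Jack) + (300 − θ_Jill) seems to imply, namely p_Jill = 300 − θ_Jack:

```python
# Sanity check of the two payment rules (my own illustration, not from the article).
COST = 300

def total_as_written(theta_jack, theta_jill):
    """Both payments subtract Jill's report, exactly as quoted."""
    p_jack = COST - theta_jill
    p_jill = COST - theta_jill
    return p_jack + p_jill          # = 600 - 2*theta_jill

def total_as_implied(theta_jack, theta_jill):
    """Payments matching the quoted deficit (300 - θ_Jack) + (300 - θ_Jill)."""
    p_jack = COST - theta_jill
    p_jill = COST - theta_jack
    return p_jack + p_jill          # = 600 - theta_jack - theta_jill

theta_jack, theta_jill = 160, 170   # arbitrary reports; room is painted since 160 + 170 >= 300
print(total_as_written(theta_jack, theta_jill))   # 260
print(total_as_implied(theta_jack, theta_jill))   # 270, still short of the $300 bill
```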
So it turns out that the reason none of the solutions I tried in the last article worked is that solutions satisfying the implicit restrictions I was imposing can’t exist. That’s interesting. It’s too bad that the second-best solution is relatively crappy. Realistically, it doesn’t seem to be any better than just negotiating with someone in the more traditional manner and accepting that you may not end up revealing your true preferences.
Typo fixed now. Jill’s payment should be p_Jill = 300 − θ_Jack.
The second-best direct mechanisms do bite the bullet and assume agents would manipulate their reports optimally on their own if the mechanism didn’t do it for them. The “bid and split excess” mechanism I mention at the very end could be better if people are occasionally honest.
I’m now curious what’s possible if agents have some known probability of ignoring incentives and being unconditionally helpful. It’d be fairly easy to calculate the potential welfare gain by adding a flag to the agent’s type saying whether they are helpful or strategic and yet again applying the revelation principle. The trickier part would be finding a useful indirect mechanism to match that, since it’d be painfully obvious that you’d get a smaller payoff for saying you’re helpful under the direct mechanism.
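Something like the following skeleton is what I have in mind for the welfare calculation, though every number in it is made up for illustration: values uniform on [0, 300], a 30% chance an agent is unconditionally helpful, and a flat 80% shading rule standing in for whatever the strategic agents would actually report.

```python
import random

# Monte Carlo skeleton for the welfare comparison (all parameters are
# illustrative assumptions, not numbers from the article).
COST = 300            # cost of painting the room
P_HELPFUL = 0.3       # assumed probability an agent ignores incentives and reports honestly
SHADE = 0.8           # placeholder: strategic agents report 80% of their true value
N = 200_000

def report(value, helpful):
    return value if helpful else SHADE * value

first_best = realized = 0.0
for _ in range(N):
    v1, v2 = random.uniform(0, COST), random.uniform(0, COST)
    h1, h2 = random.random() < P_HELPFUL, random.random() < P_HELPFUL
    first_best += max(v1 + v2 - COST, 0)            # paint exactly when true values cover the cost
    if report(v1, h1) + report(v2, h2) >= COST:     # naive "paint if reports cover the cost" rule
        realized += v1 + v2 - COST

print("first-best surplus per pair:", first_best / N)
print("realized surplus per pair:  ", realized / N)
print("surplus left on the table:  ", (first_best - realized) / N)
```

The real exercise would replace the naive decision rule and the flat shading with the optimal direct mechanism for the extended type space, but the plumbing would look about like this.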