Typo fixed now. Jill’s payment should be p_Jill = 300 - p_Jack.
The second-best direct mechanisms do bite the bullet and assume agents would manipulate optimally on their own if the mechanism didn’t do it for them. The “bid and split excess” mechanism I mention at the very end could do better if people are occasionally honest.
I’m now curious what’s possible if agents have some known probability of ignoring incentives and being unconditionally helpful. It’d be fairly easy to calculate the potential welfare gain: add a flag to the agent’s type saying whether they’re helpful or strategic, and apply the revelation principle yet again. The trickier part would be finding a useful indirect mechanism to match, since under the direct mechanism it’d be painfully obvious that reporting yourself as helpful gets you a smaller payoff.
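For a rough sense of the numbers, here’s a quick Monte Carlo sketch. Since the Jack/Jill setup isn’t fully spelled out in this thread, it uses the textbook bilateral-trade stand-in instead: buyer value and seller cost uniform on [0, 1], split-the-difference double auction, where strategic play follows the known Chatterjee–Samuelson linear equilibrium and helpful agents simply report truthfully. One big caveat: strategic behavior is held fixed at the all-strategic equilibrium rather than re-solved for the mixed population, so this is a naive what-if, not the actual second-best with a helpfulness flag.

```python
import random

# Naive what-if: each trader is independently "helpful" (truthful)
# with probability q, else plays the all-strategic equilibrium.
# Setting: buyer value v ~ U[0,1], seller cost c ~ U[0,1],
# split-the-difference (k = 1/2) double auction.
# Chatterjee-Samuelson linear equilibrium strategies:
#   buyer bids  b(v) = 2v/3 + 1/12
#   seller asks s(c) = 2c/3 + 1/4

def bid(v, helpful):
    return v if helpful else 2 * v / 3 + 1 / 12

def ask(c, helpful):
    return c if helpful else 2 * c / 3 + 1 / 4

def expected_welfare(q, trials=200_000, seed=0):
    """Average realized gains from trade for helpfulness probability q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v, c = rng.random(), rng.random()
        # Trade happens iff bid >= ask (price would be the midpoint,
        # but it doesn't affect total welfare).
        if bid(v, rng.random() < q) >= ask(c, rng.random() < q):
            # Realized surplus; can be negative in mixed matches, since
            # the strategic strategies are no longer best responses there.
            total += v - c
    return total / trials

# Benchmarks: all-strategic welfare is 9/64 ~= 0.141,
# first-best (trade iff v >= c) is 1/6 ~= 0.167.
for q in (0.0, 0.25, 0.5, 1.0):
    print(f"q = {q:.2f}: welfare ~= {expected_welfare(q):.4f}")
```

Running it, q = 0 reproduces the known 9/64 and q = 1 hits the first-best 1/6, with the mixed cases landing in between. Since strategic play isn’t re-equilibrated here, I’d read the numbers as only suggestive of how much the helpfulness flag could be worth, not as the actual second-best gain.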