You are the great commander of many robotic soldiers. Each soldier has two robotic kidneys. In wartime a soldier frequently needs a replacement kidney. If it doesn’t get one, it breaks (100%). If it does get one, the donor might break (1%).
If all soldiers were equally good at war and equally willing to give one kidney, there would be little to discuss. But war is not that simple.
In your army 1/10 are 2 times better than the median, 1/100 are 4 times better than the median, 1/1,000 are 16 times better than the median, 1/10,000 are 256 times better than the median, etc.: each tenfold increase in rarity squares the multiplier.
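For concreteness, here is a minimal sketch of that distribution in Python, assuming the closed form 2^(2^(k−1)) for the top 1/10^k fraction (my extrapolation from the four tiers above):

```python
def multiplier(k: int) -> int:
    """How many times better than the median the top 1/10^k fraction is,
    assuming each tenfold increase in rarity squares the multiplier."""
    return 2 ** (2 ** (k - 1))

for k in range(1, 5):
    print(f"top 1/10^{k}: {multiplier(k)}x the median")
# top 1/10^1: 2x the median
# top 1/10^2: 4x the median
# top 1/10^3: 16x the median
# top 1/10^4: 256x the median
```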
If « better » only meant « better at war », there would be little to discuss. But systematic winning is not that simple.
In your army « better » also means « more likely to give a kidney » and « more likely to set an example for the others to follow ». Which means that, conditional on a robotic soldier wanting to give a robotic kidney, it’s also more likely to be critical to the war effort and more likely to set an example that the less systematic winners will follow. Oh well.
At this point, my brain wants to retreat to heuristics like « Let’s assume Scott already computed that [1] », but that sounds sloppy. What’s your utilitarian model?
Continuing the pattern of distribution of “better”-ness, 1/100,000 are 65,536 times better than the median, and 1/1,000,000 are 4,294,967,296 times better than the median. If you have more than 10,000,000 soldiers then you likely have one that is 2^64 ≈ 1.8×10^19 times better.
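Continuing the sketch from above, the next tiers just keep squaring (note 4,294,967,296 = 2^32, and the 1/10,000,000 tier is 2^64):

```python
prev = 256  # the 1/10^4 tier from before
for k in range(5, 8):
    prev *= prev  # each tenfold increase in rarity squares the multiplier
    print(f"top 1/10^{k}: {prev:,} ≈ {prev:.2e}")
# top 1/10^5: 65,536 ≈ 6.55e+04
# top 1/10^6: 4,294,967,296 ≈ 4.29e+09
# top 1/10^7: 18,446,744,073,709,551,616 ≈ 1.84e+19
```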
So the elites are the only ones that matter for fighting your war. If the base rate of kidney donation is nonzero, they also immediately donate both their kidneys and die, being 10^19 times more likely to donate kidneys. So the optimal strategy is to ensure that the base rate of kidney donation is exactly zero.
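To make « immediately donate » precise under one crude assumption of mine (donation probability scales linearly with the « better »-ness multiplier, capped at 1), any nonzero base rate saturates to certainty for the top tiers:

```python
def donation_prob(base_rate: float, multiplier: float) -> float:
    # Hypothetical model: odds of donating scale with "better"-ness, capped at 1.
    return min(1.0, base_rate * multiplier)

for base_rate in (1e-6, 1e-12, 1e-18):
    print(base_rate, donation_prob(base_rate, 1.8e19))  # the ~2^64 tier
# 1e-06 1.0
# 1e-12 1.0
# 1e-18 1.0
```

Under that toy model the only base rate that keeps the elites alive is exactly zero.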
(I only thought for 1 minute.)
This argument seems valid for a large number of soldiers (~100,000). But when numbers are small, a different strategy should dominate. Perhaps forcing a uniform distribution of kidney donations (by randomly forcing a soldier to donate a kidney) could work better.
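A crude Monte Carlo of that comparison (the one-incident model, the per-donation 1% figure, and the sampled value distribution are my assumptions, and it ignores the example-setting cascade entirely):

```python
import random

rng = random.Random(0)

def sample_value() -> float:
    """Draw a soldier's value so that roughly 1/10^k of soldiers are
    2^(2^(k-1))x the median or better, as in the thread's distribution."""
    value, k = 1.0, 1
    while k <= 7 and rng.random() < 0.1:  # cap at 2^64 to avoid float overflow
        value = 2.0 ** (2 ** (k - 1))
        k += 1
    return value

def mean_loss(values: list[float], donate: bool, trials: int = 100_000) -> float:
    """Expected value lost per incident: one random soldier needs a kidney."""
    n, total = len(values), 0.0
    for _ in range(trials):
        needy = rng.randrange(n)
        if not donate:
            total += values[needy]             # no donor: the soldier breaks
        elif rng.random() < 0.01:              # forced random donor breaks 1% of the time
            total += values[rng.randrange(n)]
    return total / trials

for n in (100, 1_000, 100_000):
    army = [sample_value() for _ in range(n)]
    print(n, mean_loss(army, donate=False), mean_loss(army, donate=True))
```

Under these assumptions forced random donation loses roughly 1% of the mean soldier value per incident versus 100% for no donation, at any army size; this only compares the two forced policies and says nothing about the voluntary cascade that motivated the zero-base-rate strategy above.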
(The actual question is about your best utilitarian model, not your strategy given my model.)
A uniform distribution of kidney donations also sounds like the result when a donor is 10^19 times more likely to set the example. Maybe I should specify that a donor is unlikely to take the 1% risk unless someone else is more critical to the war effort.
Good laugh! But they’re also 10^19 times more likely to get the difference between donating one kidney and donating both.