If the organization is risk-averse, it doesn’t want risk-neutral voters to gain influence. If it’s risk-neutral, then it should incorporate opportunity costs when judging projects in hindsight. Furthermore, if in hindsight a rejected project still appears to have had a high positive EV, the org should register the rejection of the project as a mistake.
Suppose the organization is risk-neutral, and Charlie abstains on the sub-95%-chance projects rather than rejecting them (in a large organization that makes many decisions, you can't expect everyone to vote on everything). He still rejects the sub-5% projects.
By selectively telling you only what you already knew, Charlie builds up a reputation as a good predictor, unlike David, who is wrong far more often but whose input is actually useful.
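The effect is easy to see in a toy simulation. This is a minimal sketch under simplifying assumptions not stated in the text: both voters know each project's true success probability, Charlie votes only when that probability is above 95% or below 5%, and David votes on every project, predicting whichever outcome is likelier.

```python
import random

random.seed(0)

def simulate(n_projects=10_000):
    """Compare the measured accuracy of a selective voter (Charlie)
    against an always-voting one (David)."""
    charlie_votes = charlie_correct = 0
    david_votes = david_correct = 0
    for _ in range(n_projects):
        p = random.random()              # project's true success probability
        succeeded = random.random() < p  # realized outcome
        # Charlie only votes on near-certain outcomes.
        if p >= 0.95 or p < 0.05:
            charlie_votes += 1
            charlie_correct += (p >= 0.95) == succeeded
        # David votes on everything, predicting the likelier outcome.
        david_votes += 1
        david_correct += (p >= 0.5) == succeeded
    return (charlie_correct / charlie_votes, charlie_votes,
            david_correct / david_votes, david_votes)

c_acc, c_n, d_acc, d_n = simulate()
```

Charlie's measured accuracy comes out around 97%, versus roughly 75% for David, even though Charlie votes on only about a tenth of the projects and his votes carry almost no information the organization didn't already have.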
This misses the heart of that criticism: mistakes have different magnitudes.