A seems like more of a theorem than an axiom: We are trying to maximize our estimate of human preferences, so upward errors are going to have more effect on the outcome than downward errors. Therefore, in case of doubt, guess low. And if we keep track of a probability distribution over candidate estimates rather than a point estimate, A dissolves like the Fermi paradox.
If we are good at maximizing, we will bring about whatever situation we judged best. If, somewhere across the many possible situations to judge, we make an extreme upward error, we will bring about that situation and end up with its essentially random actual outcome. If the error is downward, we will only fail to bring about that situation, which is an opportunity cost if it was one of the better outcomes, but not a disaster.
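A toy simulation (my own sketch, not from the thread; the variances are illustrative assumptions) of both points: a maximizer over noisy estimates systematically overestimates the value of whatever it picks, and replacing the point estimate with the Gaussian posterior mean, i.e. "guessing low" by the amount the full distribution dictates, removes that inflation:

```python
import random

random.seed(0)
PRIOR_SD = 1.0   # assumed spread of true values across situations
NOISE_SD = 1.0   # assumed error in our preference estimates

def choose(n_options=1000):
    """Pick the situation whose noisy estimate is highest."""
    true_vals = [random.gauss(0, PRIOR_SD) for _ in range(n_options)]
    estimates = [v + random.gauss(0, NOISE_SD) for v in true_vals]
    i = max(range(n_options), key=lambda j: estimates[j])
    return estimates[i], true_vals[i]

trials = [choose() for _ in range(300)]
avg_estimate = sum(e for e, _ in trials) / len(trials)
avg_true = sum(t for _, t in trials) / len(trials)

# Shrinkage factor from the Gaussian posterior E[true | estimate]:
# the "guess low" correction, derived from the full distribution
# rather than the point estimate.
shrink = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)
avg_posterior = shrink * avg_estimate

print(f"naive estimate of chosen option: {avg_estimate:.2f}")   # inflated
print(f"posterior-mean estimate:         {avg_posterior:.2f}")  # roughly unbiased
print(f"actual value of chosen option:   {avg_true:.2f}")
```

With these assumed variances the selected estimate overstates the real value by roughly a factor of two; the qualitative pattern (upward errors get selected for, downward errors only cost opportunities) holds at any noise level.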
Interesting take; I’ll consider it more...
4 years later: Do you agree now?
I believe I do.
Could you explain this step?
If we are good at maximizing, we will bring about whatever situation we judged best. If, somewhere across the many possible situations to judge, we make an extreme upward error, we will bring about that situation and end up with its essentially random actual outcome. If the error is downward, we will only fail to bring about that situation, which is an opportunity cost if it was one of the better outcomes, but not a disaster.