Perhaps ambiguity aversion is merely a good heuristic.
Well, of course. Finite ideal rational agents don’t exist. If you were designing a decision-theory-optimal AI, that optimality would be a property of its environment, not of some ideal abstract computing space. I can think of at least one reason why ambiguity aversion could be the optimal algorithm in environments with limited computing resources:
Consider a self-modification algorithm that adapts to new problem domains. Restructuring (learning) is the hardest of its tasks, so the AI self-modifies only rarely. As it encounters new decision-theoretic problems, it usually does not choose self-modification, instead kludging together old circuitry and/or cached answers to conserve compute cycles. So when choosing answers to your 3 problems, it would treat each answer as something it is committing to repeat, and what it actually needs to maximize is expected value over those repetitions in its environment, an environment that includes its own source code.
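Here is a minimal sketch of that reuse pattern; the class, method, and dictionary names (CachingAgent, deliberate, and so on) are hypothetical, just to make the commitment effect concrete: once an answer is cached, later encounters with the same kind of problem replay it instead of re-deliberating.

```python
# Hypothetical sketch: an agent that only deliberates when it has compute to
# spare, so whatever answer it derives first gets replayed on later encounters.

class CachingAgent:
    def __init__(self, compute_budget):
        self.compute_budget = compute_budget
        self.cache = {}  # problem signature -> committed answer

    def deliberate(self, problem):
        # Stand-in for expensive restructuring / fresh decision-theoretic work.
        self.compute_budget -= 10
        return max(problem["options"], key=problem["score"])

    def decide(self, problem):
        key = problem["signature"]
        if key in self.cache or self.compute_budget <= 0:
            # Cheap path: kludge together the old answer (or a default one).
            return self.cache.get(key, problem["options"][0])
        answer = self.deliberate(problem)
        self.cache[key] = answer  # commitment: this answer will be reused
        return answer
```

The point is only that, for such an agent, choosing an answer is in effect choosing the answer it will keep giving.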
Ambiguity aversion would then be commitment-risk aversion, where future compounded failures change the dollars-to-utility conversion. On each iteration of the problem, the value of a dollar can change, and if you don’t maximize minimum expected value, you may end up betting your last $100, which is worth nearly infinite value to you, for a chance at gaining another $100, which is worth far less, even if you started with $1000.
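A small numerical sketch of the compounding point, under the usual diminishing-marginal-utility assumption: a bet that is favorable in expected dollars can still ruin you almost surely once you commit to repeating it, which is exactly when “my last $100 is worth nearly infinite value” bites. The specific bet and policies below are made up for illustration.

```python
import random

def ruin_rate(policy, trials=10_000, rounds=20, start=100.0):
    """Fraction of runs in which repeatedly taking the bet wipes out all wealth."""
    ruined = 0
    for _ in range(trials):
        wealth = start
        for _ in range(rounds):
            stake = policy(wealth)      # how much to put at risk this round
            if random.random() < 0.6:
                wealth += stake         # win: the stake is matched
            else:
                wealth -= stake         # loss: the stake is gone
            if wealth <= 0:
                ruined += 1
                break
    return ruined / trials

def all_in(wealth):
    # Expected-dollar maximizer: the bet is +EV, so it stakes everything each round.
    return wealth

def cautious(wealth):
    # Worst-case-guarding policy: never risk more than a fifth of current wealth.
    return 0.2 * wealth

print("ruin rate, all-in:  ", ruin_rate(all_in))    # ~1 - 0.6**20, near-certain ruin
print("ruin rate, cautious:", ruin_rate(cautious))  # 0.0: wealth can shrink but never hits zero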
We see this in ourselves all the time. If you make a decision, expect to be more likely to make the same decision again in the future; and if you change your lifestyle, expect it to be hard to change back, even if you later learn that changing back would mean deleting a bias.
And if so, do we need a different framework that can capture a broader class of “rational” agents, including maximizers of minimum expected utility?
Rational agents have source code whose optimality is a function of their environments. There is no finite cross-domain Bayesian in compute-space; only in the design-space that includes environments.
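For concreteness, a “maximizer of minimum expected utility” can be read as the maxmin rule: score each act by its worst-case expected utility over a set of candidate priors, and pick the best worst case. A toy sketch, with acts, priors, and payoffs invented purely for illustration:

```python
def expected_utility(payoffs, prior):
    return sum(p * u for p, u in zip(prior, payoffs))

def maxmin_choice(acts, priors):
    # acts: name -> payoff per state; priors: candidate probability vectors.
    # Pick the act whose worst-case expected utility is highest.
    return max(acts, key=lambda a: min(expected_utility(acts[a], q) for q in priors))

# Ellsberg-flavoured toy: states are "ball is red" / "ball is black", but the
# red/black split is ambiguous, so the agent keeps several priors in play.
priors = [(0.3, 0.7), (0.5, 0.5), (0.7, 0.3)]
acts = {
    "bet_on_red":   (100, 0),   # utility if red, utility if black
    "bet_on_black": (0, 100),
    "sure_thing":   (40, 40),
}

print(maxmin_choice(acts, priors))  # -> "sure_thing": its worst case (40) beats 30
```

On these payoffs the rule prefers the unambiguous act, which is exactly the ambiguity-averse behavior the heuristic story above is trying to account for.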