[Question] How to model uncertainty about preferences?
I’ve recently started to think about how a nascent “hot mess” superintelligence could reflect on its own values and converge on something consistent. The simplest way to think about this, it seems to me, is to model it as a process in which the superintelligence resolves uncertainty about its own preferences.
Suppose an agent knows that it is an expected utility maximizer and is uncertain between two utility functions, U1 and U2, to which it assigns probabilities p1 and p2. The agent must choose between two actions, a1 and a2. Say the optimal action under U1 is a1 and under U2 is a2. To maximize the expected value of p1U1 + p2U2, the agent chooses a1. But choosing a1 is also decisive evidence in favor of U1, so the agent updates p1 to 1. This representation of uncertain preferences looks unsatisfactory because it quickly and predictably collapses onto a single utility function.
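To make the failure mode concrete, here is a minimal sketch of the naive scheme (the payoff numbers and the 60/40 prior are mine, chosen only for illustration): the agent maximizes expected utility under its mixture of hypotheses, then treats its own choice as if it had been made by the true utility function, and its credence collapses after a single action.

```python
import numpy as np

# Toy numbers: two utility hypotheses over two actions.
# Rows: U1, U2; columns: a1, a2.
U = np.array([[1.0, 0.0],   # U1 prefers a1
              [0.0, 1.0]])  # U2 prefers a2
p = np.array([0.6, 0.4])    # p1, p2: credence in each hypothesis

# Step 1: maximize expected utility under the mixture p1*U1 + p2*U2.
expected_utility = p @ U                    # expected utility of each action
chosen = int(np.argmax(expected_utility))   # -> a1 (index 0)

# Step 2: the naive self-referential update. The (bad) self-model says
# "if my true utility were Ui, I would have taken Ui's optimal action",
# so the agent's own choice of a1 is treated as decisive evidence for U1.
likelihood = np.array([float(np.argmax(U[i]) == chosen) for i in range(2)])
posterior = likelihood * p
posterior /= posterior.sum()

print(chosen)     # 0: the agent picks a1
print(posterior)  # [1. 0.]: credence collapses onto U1 after one step
```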
Does anyone know of a good model of uncertain preferences that can meet the following criteria, perhaps after some additions?
1. No spurious updates from the agent’s own predictable behavior.
2. Controllable updates through something like human feedback. In general, there should be a specific class of events/observations in the environment that provide evidence about the hypotheses over preferences, and nothing else should. (A sketch of criteria 1–2 follows after this list.)
3. Preservation of value: if the agent believes it is a paperclip maximizer with probability 60% and a human-flourishing maximizer with probability 40%, and it expects no further information about its preferences, it should not bet everything on “maximizing paperclips”; it should save at least some chunk of the universe for human flourishing.
4. Convergence: in the best case, resolving the uncertainty should lead to strong conclusions about the agent’s preferences.
5. A “none of the above” hypothesis in the distribution: if the agent suspects that none of its hypotheses is compatible with the evidence, it should prioritize the hypothesis “your best decision is to shut down and send your operators an error log.”
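For criteria 1–2, the intended behavior might look something like the following sketch. This is my own toy construction rather than an established proposal: credences over utility hypotheses are updated only on a designated class of feedback observations, and the agent’s own actions are deliberately excluded from that class. The likelihood numbers are made up purely for illustration.

```python
import numpy as np

# Only observations in this designated feedback class carry evidence about
# preferences. Entries are P(observation | hypothesis) for (U1, U2);
# the numbers are illustrative, not derived from anything.
FEEDBACK_LIKELIHOODS = {
    "human_approves":    np.array([0.2, 0.9]),
    "human_disapproves": np.array([0.8, 0.1]),
}

def update(credences, observation):
    """Bayes update, but only for observations in the feedback class."""
    if observation not in FEEDBACK_LIKELIHOODS:
        return credences                    # e.g. the agent's own choice of a1
    posterior = FEEDBACK_LIKELIHOODS[observation] * credences
    return posterior / posterior.sum()

p = np.array([0.6, 0.4])
p = update(p, "chose_a1")        # no change: own behavior is not evidence
p = update(p, "human_approves")  # shifts weight toward U2
print(p)                         # [0.25 0.75]
```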
Nash bargaining between the different hypotheses about preferences looks close to having these desirable properties, but I am not sure; maybe something better has already been developed.
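For what it’s worth, here is a rough sketch of the kind of thing I mean, assuming linear utilities in each hypothesis’s share of a fixed resource, a disagreement point of zero, and bargaining weights equal to the credences (that last choice is one arbitrary option among several). Under these assumptions, the asymmetric Nash solution splits the resource roughly in proportion to the credences instead of betting everything on the most probable hypothesis.

```python
import numpy as np
from scipy.optimize import minimize

# Asymmetric Nash bargaining between the two utility hypotheses, with
# bargaining weights equal to the agent's credences. Each hypothesis values
# only its own share of a fixed resource (utility linear in the share);
# the disagreement point is (0, 0).
credences = np.array([0.6, 0.4])  # P(paperclip maximizer), P(flourishing maximizer)

def neg_log_nash_product(x):
    # Maximizing prod(x_i ** w_i) is equivalent to maximizing sum(w_i * log x_i).
    x = np.clip(x, 1e-9, None)
    return -np.sum(credences * np.log(x))

# Shares are non-negative and sum to 1 (the whole universe gets allocated).
result = minimize(
    neg_log_nash_product,
    x0=np.array([0.5, 0.5]),
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
    bounds=[(0.0, 1.0), (0.0, 1.0)],
)
print(result.x)  # ~[0.6, 0.4]: neither hypothesis gets bet down to zero
```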