Regarding exploration, I propose the following line of attack:
Characterize classes of environments (more generally, incomplete models) such that there is a policy with sublinear regret for every environment in the class; a tentative formalization is sketched right after this list. Something like Littlestone dimension, but for the full-fledged non-oblivious case. (As far as I know, this theory doesn’t exist at present?)
(Hopefully) Prove that, for a sufficiently slowly falling time discount, a Bayesian / “Marketian” agent with a prior constructed from such a class has sublinear regret on that class.
Try to demonstrate that in multi-agent scenarios we can arrange for each agent to belong to sufficiently many incomplete models in the prior of the other agents to yield game-theoretic convergence.
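To pin down what the first step asks for, here is one possible formalization (the notation is mine and nothing here is fixed terminology). For a class $\mathcal{C}$ of environments and a policy $\pi$, define the worst-case regret at horizon $T$ by

$$\mathrm{Reg}_\pi(T) \;=\; \sup_{\mu \in \mathcal{C}} \left( \sup_{\pi'} \mathbb{E}^{\pi'}_{\mu}\Big[\sum_{t<T} r_t\Big] \;-\; \mathbb{E}^{\pi}_{\mu}\Big[\sum_{t<T} r_t\Big] \right).$$

Call $\mathcal{C}$ learnable when some policy $\pi$ achieves $\mathrm{Reg}_\pi(T) = o(T)$ (for incomplete models, each $\mu$ would be replaced by a set of environments and the inner expectation by an infimum over that set). The first step is then to find a combinatorial condition on $\mathcal{C}$, analogous to Littlestone dimension, that is equivalent to learnability; the second is the conjecture that a Bayesian agent whose prior is supported on $\mathcal{C}$, with discount falling slowly enough, achieves sublinear regret whenever any policy does.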
Regarding Nash equilibria:
The obvious way to get something like a Nash equilibrium is to consider an incomplete model that says something like “if you follow policy π, your reward will be at least that much.” Then, modulo the exploration problem, if the reward estimate is sufficiently tight, your policy is guaranteed to be asymptotically at least as good as π. This is not really a Nash equilibrium since each agent plays a response which is superior to some set of “simple” responses but not globally optimal. I think that in games that are in some sense “simple” it should be possible to prove that the agents are pseudorandomly sampling an actual Nash equilibrium.
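To spell out the construction above (again, the notation is mine): the incomplete model “if you follow policy $\pi$, your reward will be at least $u$” can be identified with the set of environments

$$\Phi_{\pi,u} \;=\; \{\, \mu \;:\; \mathbb{E}^{\pi}_{\mu}[U] \ge u \,\},$$

where $U$ is the (discounted) total reward. If this model receives positive weight in the agent’s prior and the exploration problem is solved, so that the agent has the sublinear-regret guarantee discussed above, then in any environment actually belonging to $\Phi_{\pi,u}$ the agent’s own expected reward is asymptotically at least $u$, i.e. at least what $\pi$ is guaranteed to secure; the gap between $u$ and $\pi$’s true value is the “tightness” of the reward estimate mentioned above.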
Regarding malignity of the prior, AFAICT incomplete models don’t improve anything. My current estimate is that this problem cannot be solved on this level of abstraction. Instead, it should be solved by carefully designing a well-informed prior over agents for the value learning process (e.g. using some knowledge of neurobiology). It is thorny.
This also seems to be Stuart Russell’s view in general, though he also imagines empirical feedback (e.g. inspection of the posterior, experience with weaker value learning systems) playing a large role. I don’t think he is worried about the more exotic failure modes (I’m not sure whether I am).
Note that the same problem comes up for an act-based/task AI’s prior over the environment, or for logical uncertainty. I don’t see an analogous proposal in those cases. In the best case, it seems like you will still suffer a large performance hit (if you can’t make progress at this level of abstraction).
It seems quite challenging to make the AI represent the posterior in a human-understandable way. Moreover, if the attacker can manipulate the posterior, it can purposefully shape it to be malicious when inspected.
Also, this is precisely the case where experience with weaker systems is close to useless. This effect only appears when the agent is capable of reasoning at least as sophisticated as the reasoning you used to come up with this problem, so the agent will be at least as intelligent as Paul Christiano. More precisely, the agent would have to be able to reason in detail about possible superintelligences, including predicting their most likely utility functions. The first AI to have this property might already be superintelligent itself.
I suspect that worrying about “exotic” failure modes might be beneficial precisely because few other people will worry about them. And the reason few will worry about them is that they sound like something out of science fiction (even more so than the “normal” AI risk stuff), which is not a good reason.
In any case, I hope that at some point we will have a mathematical model sophisticated enough to formalise this failure mode, which would allow thinking about it much more clearly.