Assessors that are hard to seduce
A putative new idea for AI control; index here.
This is a stub of an idea, looking at “human in the loop”-type restrictions.
An assessor is a particular type of restriction. It involves a (generally human) agent who is tasked with observing something (the AI's behaviour, or perhaps the expected outcome of its plans) and then deciding whether to allow the AI to proceed or not.
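To make the setup concrete, here is a minimal sketch of the idea as a simple "propose, review, proceed or stop" loop. Everything in it (the Plan and Assessor names, the run_with_assessor function) is an illustrative assumption, not anything specified in this post.

```python
# Minimal sketch of an assessor gating an AI's plan: the AI proposes,
# the (generally human) assessor reviews, and execution only happens on approval.

from dataclasses import dataclass


@dataclass
class Plan:
    description: str       # what the AI proposes to do
    expected_outcome: str   # what the AI expects to happen if it proceeds


class Assessor:
    """An agent tasked with observing the plan and deciding whether to allow it."""

    def review(self, plan: Plan) -> bool:
        print(f"Proposed plan:    {plan.description}")
        print(f"Expected outcome: {plan.expected_outcome}")
        answer = input("Allow the AI to proceed? [y/n] ")
        return answer.strip().lower() == "y"


def run_with_assessor(propose, execute, assessor: Assessor) -> bool:
    """Only execute the AI's plan if the assessor allows it."""
    plan = propose()
    if assessor.review(plan):
        execute(plan)
        return True
    return False
```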
Many of the methods suitable for preventing the hacking of restrictions can also be used to protect assessors. If you consider that the AI has three options, “hold a gun to the assessor’s head”, “seduce the assessor” and “do what the assessor wants”, it’s clear that the first two (the first more obviously than the second) a) would be stupid to do if there were no assessor, and b) are behaviours that are very informative as to whether the assessor exists.
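One hedged way to picture point b): compare the value an action has for the AI in a world where the assessor exists with its value in a world where there is no assessor. Holding a gun to the assessor's head only pays off in the first world, so the gap is large. The value_of callable below is a placeholder assumption, not a real API.

```python
# Sketch: actions whose appeal depends heavily on the assessor existing
# are exactly the ones that would be "stupid to do if there were no assessor".

def assessor_dependence(action, value_of) -> float:
    """How much of the action's value comes purely from the assessor existing?"""
    return value_of(action, assessor_present=True) - value_of(action, assessor_present=False)


def flag_manipulative(actions, value_of, threshold: float = 0.0):
    """Return the actions with a suspiciously large assessor-dependent payoff."""
    return [a for a in actions if assessor_dependence(a, value_of) > threshold]
```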
However, there are a few more steps we can take with assessors. First of all, the AI need not know which particular human will be the assessor, forcing it to take general human behaviour into account. This is more effective if the assessing process does not allow any real interaction between the AI and the assessor.
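A rough sketch of that first step, assuming the plan is scored against a whole pool of possible assessors rather than one known individual (the pool and the approves callable are illustrative assumptions):

```python
# Sketch: with the assessor drawn at random from a pool, the AI can only reason
# about human behaviour in general, not about one particular person's weaknesses.

import random


def expected_approval(plan, possible_assessors, approves, samples: int = 100) -> float:
    """Estimate the probability that a randomly drawn assessor would allow the plan."""
    drawn = random.choices(possible_assessors, k=samples)
    return sum(bool(approves(a, plan)) for a in drawn) / samples
```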
Indeed, the assessor need not be a human: it could be a superintelligent agent itself (a hypothetical one, if need be) that has its own motivation.
Lastly, since assessors are agents and can therefore form their own predictions, we could require that the assessor’s impressions be accurate predictions of what the AI intends to do (again, this could be done by having a hypothetical superintelligence assess the accuracy of the assessor’s predictions). We’ll look at ideas about modelling and model accuracy in a subsequent post.
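Until that post, here is a loose sketch of what "accurate predictions" could amount to, assuming some placeholder measure of similarity between the assessor's prediction and the AI's actual behaviour (the similarity function and threshold are assumptions for illustration):

```python
# Sketch: the assessor's approval only counts if their prediction of the AI's
# behaviour turned out to be accurate enough (the check the post assigns to a
# hypothetical superintelligence is reduced here to a placeholder similarity score).

def prediction_accuracy(assessor_prediction, actual_behaviour, similarity) -> float:
    """Score in [0, 1]: how well the assessor's impression predicted the AI's behaviour."""
    return similarity(assessor_prediction, actual_behaviour)


def approval_counts(approved: bool, accuracy: float, min_accuracy: float = 0.8) -> bool:
    """Treat the assessor's approval as valid only if their prediction was accurate enough."""
    return approved and accuracy >= min_accuracy
```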