Its failure mode, though, is that it doesn't preclude, for instance, a probabilistic mix of an extreme optimised policy with a random inefficient one.
I think there is a more serious failure mode here.
If an AI wants to keep a utility function within a certain range, what's to stop it from dramatically increasing its own intelligence, access to resources, etc. towards infinity, just to increase the probability of staying within that range in the future from 99.9% up to 99.999%? You still might run into the same "instrumental goals" problem.
I’d call that a mix of extreme optimised policy with inefficiency (not in the exact technical sense, but informally).
There's nothing to stop the agent from doing that, but it's also not required. This is expected utility we're talking about, so "expected utility in the range 0.8-1" is achieved, with certainty, by a policy that has a 90% probability of achieving 1 utility (and 10% of achieving 0). You may say there's also a tiny chance of the AI's estimates being wrong (its sensors, its probability calculations…), but all that would just be absorbed into, say, an 89% chance of success.
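Spelling out the arithmetic with those numbers (a quick check using only the figures mentioned above):

```python
# Expected utility of the stochastic policy: 90% chance of utility 1, 10% of utility 0.
expected_utility = 0.90 * 1 + 0.10 * 0            # = 0.90, inside the [0.8, 1] target range
# Folding the small chance of faulty sensors/estimates into the success probability:
degraded_expected_utility = 0.89 * 1 + 0.11 * 0   # = 0.89, still inside [0.8, 1]
print(expected_utility, degraded_expected_utility)
```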
In a sense, this was the hope for the satisficer: that it would make a half-assed effort. But it can choose an optimal maximising policy instead. This type of agent can also choose a maximising-style policy, but mix it with deliberate inefficiency; i.e. it isn't really any better.
Ah, interesting, I understand better now what you’re saying. That makes more sense, thank you.
Here's another possible failure mode, then: if the AI's goal is just to manipulate its own expected utility, and it calculates expected utility using some Bayesian method of updating priors with new information, could it selectively seek out new information to convince itself that what it was already going to do will have an expected utility in the range of 0.8, and game the system that way? I know that sounds strange, but humans do stuff like that all the time.
It can't bias its information search (looking for evidence for X rather than evidence against it), but it can play on the variance.
Suppose you want to have a belief in X in the 0.4 to 0.6 range, and there’s a video tape that would clear the matter up completely. Then not watching the video is a good move! If you currently have a belief of 0.3, then you can’t bias your video watching, but you could get an idiot to watch the video and recount it vaguely to you; then you might end up with a higher chance (say 20%) of being in the 0.4 to 0.6 range.
You can't look for evidence for X rather than evidence against it, in the sense of conservation of expected evidence. But that just means that the size of each possible update, weighted by its probability, has to balance out. It does not mean that the probability of finding evidence in favor is 50% and the probability of finding evidence against is 50%. So in that sense, you can indeed look for evidence in favor, by looking in places that have a very high probability of yielding evidence in favor and a low probability of yielding evidence against; it's just that if you unluckily happen to find evidence against, it will be extremely strong evidence against.
Yes, that's what I informally call playing the variance. You want to look wherever there is the highest probability of a swing into the range that you want.
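A minimal numerical sketch of the videotape example above (the 0.3 prior and the 0.4-0.6 target range are from the comment; the likelihoods for the idiot's vague recounting are invented for illustration):

```python
# Prior belief in X is 0.3; the target is to end up with a credence in [0.4, 0.6].
prior = 0.3

def posterior(p_report_given_x, p_report_given_not_x, favourable):
    """Bayesian update of the prior on X given a favourable/unfavourable report."""
    if favourable:
        num = prior * p_report_given_x
        den = prior * p_report_given_x + (1 - prior) * p_report_given_not_x
    else:
        num = prior * (1 - p_report_given_x)
        den = prior * (1 - p_report_given_x) + (1 - prior) * (1 - p_report_given_not_x)
    return num / den

# Option 1: watch the tape yourself. It settles X completely, so the posterior is
# 1 (with probability 0.3) or 0 (with probability 0.7): a 0% chance of [0.4, 0.6].

# Option 2: have the idiot recount the tape vaguely. Invented likelihoods:
# P(favourable report | X) = 0.4, P(favourable report | not X) = 0.15.
p_fav = prior * 0.4 + (1 - prior) * 0.15        # = 0.225
post_fav = posterior(0.4, 0.15, True)           # ~0.53, inside [0.4, 0.6]
post_unfav = posterior(0.4, 0.15, False)        # ~0.23, outside

# Conservation of expected evidence: the expected posterior is still the prior...
print(p_fav * post_fav + (1 - p_fav) * post_unfav)   # 0.3
# ...but there is now a ~22% chance of ending up inside the target range.
print(p_fav)                                         # 0.225
```

So the expected posterior stays pinned at the prior, exactly as conservation of expected evidence requires, yet the choice of how to observe changes the spread of possible posteriors, and hence the chance of a swing into the desired range.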
If it’s capable of self-modifying, then it could do weirder things.
For example, let's say the AI knows that news source X will almost always push stories in favor of action Y. (Fox News will almost always push information that supports the argument that we should bomb the Middle East, The Guardian will almost always push information that supports us becoming more socialist, whatever.) If the AI wants to bias itself in favor of thinking that action Y will create more utility, what if it self-modifies to first convince itself that news source X is a much more reliable source of information than it actually is, and to weigh that information more heavily in its future analysis?
If it can't self-modify directly, it could maybe do tricky things that involve only observing the desired information source at key moments, with the goal of increasing its own confidence in that source, and then, once it has raised its confidence sufficiently, looking at that source to find the information it wants.
(Again, this sounds crazy, but keep in mind humans do this stuff to themselves all the time.)
Etc. Basically, what this all boils down to is that the AI doesn't really care about what happens in the real world; it's not trying to actually accomplish a goal. Instead, its primary objective is to make itself think that it has an 80% chance of accomplishing the goal (or whatever), and once it has done that, it doesn't really matter whether the goal happens or not. It has a built-in motivation to try to trick itself.
If it’s capable of self-modifying, then it could do weirder things.
Yes, but much weirder than you’re imagining :-) This agent design is highly unstable, and will usually self-modify into something else entirely, very fast (see the top example where it self-modifies into a non-transitive agent).
If the AI wants to bias itself in favor of [...], what if it self-modifies to first convince itself that news source [...]
What is the expected utility from that bias action (given the expected behaviour after)? The AI has to make that bias decision while not being biased. So this doesn’t get round the conservation of expected evidence.
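One way to make that concrete, with made-up numbers (the 0.3 estimate, the inflated post-modification credence, and the simple success/failure utility are all illustrative assumptions, not from the discussion above):

```python
# The agent considers self-modifying so that it over-trusts news source X
# before deciding whether to take action Y.

p_goal_given_Y = 0.3              # its CURRENT best estimate that Y achieves the goal

# It predicts that, after the self-modification, it would end up believing
# something more optimistic, because X almost always reports in favour of Y:
predicted_future_credence = 0.85  # invented number

# But the bias decision itself has to be scored with the current, unbiased model.
# Whatever the corrupted future self will *believe*, the current model's estimate
# of Y actually achieving the goal is unchanged:
expected_utility_of_bias_plan = p_goal_given_Y * 1 + (1 - p_goal_given_Y) * 0

print(predicted_future_credence)       # 0.85 -- what it would think later
print(expected_utility_of_bias_plan)   # 0.3  -- what it expects now, so no gain from biasing
```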