Your comment seems absolutely right; I have no idea where the whole ‘turn itself off’ thing came from.
I doubt diminishing returns would come into effect. Examples like Graham’s number and Conway’s chained arrow notation seem to be strong evidence that the task of ‘store the biggest number possible’ does not run into diminishing returns, but instead achieves accelerating returns of truly mind-boggling proportions.
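To give a feel for how fast this gets, here is a small Python sketch (my own illustration, nothing from the article; the function is a made-up helper) of Knuth’s up-arrow notation, the recursive building block behind Graham’s number:

```python
# Knuth's up-arrow notation, the building block behind Graham's number.
# Even the smallest interesting inputs explode past anything storable, which
# is the "accelerating returns" point: each extra arrow dwarfs everything
# reachable with fewer arrows.

def up_arrow(a, n, b):
    """Compute a followed by n up-arrows then b, by the usual recursive definition."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7,625,597,484,987
# up_arrow(3, 3, 3) is 3^^^3, already a power tower of 7,625,597,484,987 threes;
# Graham's number starts from 3^^^^3 and stacks 64 such layers, each using the
# previous layer's value as the number of arrows.
```

Conway’s chained arrows then outgrow any fixed number of up-arrows by making the arrow count itself the thing that gets iterated.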
However, I have to admit that I think the whole idea is rubbish. The main problem is that the author is confusing two different tasks: “maximise the extent to which the future meets my future preferences” and “maximise the extent to which the future meets my current preferences”.
To explain what I mean more rigorously, suppose we have an AI with a utility function U0 which is considering whether or not it should alter its utility function to a new function U1. It extrapolates possible futures and deduces that if it sticks with U0 the universe will end up in state A, whereas if it switches to U1 the universe will end up in state B (e.g. if U0 is paper-clip maximising, then A contains a lot of paper-clips).
“Maximise the extent to which the future meets my future preferences” means it will switch if and only if U1(B) > U0(A).
As the article points out, it is very easy to find a U1 which meets this criterion: simply define U1(x) = U0(x) + 1 (actions are unaffected by positive affine transforms of utility functions, so B = A for this choice of U1).
“Maximise the extent to which the future meets my current preferences” means it will switch if and only if U0(B) > U0(A).
This criterion is much more demanding; for example, U1(x) = U0(x) + 1 clearly no longer works.
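To make the two criteria concrete, here is a toy sketch (my own, with a made-up one-decision world; none of this is from the article) showing that U1(x) = U0(x) + 1 passes the first test but fails the second:

```python
# Toy illustration of the two self-modification criteria. The "world" is a
# single choice of how many paper-clips to make, and the outcome state under
# a utility function is whatever that function's maximiser would pick.

STATES = range(10)  # candidate universe states: 0..9 paper-clips

def outcome(u):
    """State the universe ends up in if the agent optimises utility u."""
    return max(STATES, key=u)

U0 = lambda clips: clips          # paper-clip maximiser
U1 = lambda clips: U0(clips) + 1  # shifted copy: same rankings, same behaviour

A = outcome(U0)  # state reached by keeping U0
B = outcome(U1)  # state reached after switching to U1 (identical, since the
                 # +1 shift changes no rankings)

# Criterion 1: "future meets my *future* preferences" -- switch if U1(B) > U0(A).
print(U1(B) > U0(A))  # True: trivially satisfied by the +1 shift

# Criterion 2: "future meets my *current* preferences" -- switch if U0(B) > U0(A).
print(U0(B) > U0(A))  # False: judged by U0, the switch gains nothing
```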
I suspect that for most internally consistent utility functions this second criterion is impossible to satisfy (thought experiment: is there any utility function a paper-clip maximiser could switch to which would result in a universe containing more paper-clips?).
Even if I am wrong about it being mostly impossible, it is not an especially worrying problem. I would have no problem with an FAI switching to a new utility function which was even more friendly than the one we gave it.
Of course, you could program an AI to do either of the tasks, but there are a number of reasons why I consider the second to be better. Firstly, for all the reasons the article gives, it is more likely to do whatever you wanted it to do. Secondly, it is more general, since the former can be obtained as a special case of the latter.
The article’s mistake is right there in the title: it fails to break out of the rather anthropomorphic reward/punishment mode of thinking.
thought experiment: is there any utility function a paper-clip maximiser could switch to which would result in a universe containing more paper-clips?
Sort of. For most utility functions there are transformations, such as compiler optimizations, which make them more efficient to evaluate without changing their values, and it will definitely want to apply those. It is also a good idea to modify the utility function on any inputs where exact evaluation is computationally intractable, replacing it there with an approximation (probably with a penalty to represent the uncertainty).
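Roughly the shape I have in mind, as a Python sketch in which the exact utility, the surrogate, and the cost model are all made-up stand-ins:

```python
# Rough sketch of "replace the exact utility with a penalised approximation on
# inputs where exact evaluation is intractable". Every function and constant
# here is a hypothetical stand-in, purely to show the shape of the idea.

import math

def evaluation_cost(state):
    return 2 ** state          # pretend exact evaluation costs 2**state steps

def exact_utility(state):
    return math.sqrt(state)    # stand-in for the original, expensive utility

def cheap_estimate(state):
    return 0.5 + 0.1 * state   # stand-in for a fast surrogate

def modified_utility(state, budget=10**6, error_bound=0.3, penalty=1.0):
    if evaluation_cost(state) <= budget:
        return exact_utility(state)   # tractable input: value left unchanged
    # Intractable input: fall back to the surrogate, docked by an uncertainty
    # penalty so states are not favoured merely because they are hard to judge.
    return cheap_estimate(state) - penalty * error_bound

print(modified_utility(10))   # cheap enough: exact value
print(modified_utility(50))   # intractable: penalised approximation
```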
thought experiment: is there any utility function a paper-clip maximiser could switch to which would result in a universe containing more paper-clips?
Yes. Suppose the paperclip maximizer inhabits the same universe as a bobby-pin maximizer. The two agents interact in a cooperative game which has a (Nash) bargaining solution that provides more of both desirable artifacts than either player could achieve without cooperating. It is well known that cooperative play can be explained as a kind of utilitarianism—both players act so as to maximize a linear combination of their original utility functions. If the two agents have access to each other’s source code, and if the only way for them to enforce the bargain is to both self-modify so as to each maximize the new joint utility function, then they both gain by doing so.
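A toy numerical version of that bargain, with entirely made-up payoffs and equal weights chosen purely for illustration:

```python
# Toy sketch with made-up payoffs: enforcing the bargain by both agents
# switching to a shared joint utility. Each entry is (paper-clips, bobby-pins)
# produced, as a function of the two agents' moves in a one-shot game.

MOVES = ("cooperate", "defect")
PAYOFF = {
    ("cooperate", "cooperate"): (8, 8),
    ("cooperate", "defect"):    (1, 10),
    ("defect",    "cooperate"): (10, 1),
    ("defect",    "defect"):    (3, 3),
}

u_clippy = lambda c, p: PAYOFF[(c, p)][0]  # cares only about paper-clips
u_pinny  = lambda c, p: PAYOFF[(c, p)][1]  # cares only about bobby-pins

# Un-modified agents: whatever pinny does, defecting gets clippy more clips
# (and symmetrically for pinny), so they end up at ("defect", "defect") -> (3, 3).
for p in MOVES:
    assert max(MOVES, key=lambda c: u_clippy(c, p)) == "defect"

# After both self-modify to the joint utility (a linear combination of the
# originals; the bargaining solution is what pins down the weights):
u_joint = lambda c, p: 0.5 * u_clippy(c, p) + 0.5 * u_pinny(c, p)
best = max(((c, p) for c in MOVES for p in MOVES), key=lambda m: u_joint(*m))
print(best, PAYOFF[best])  # ('cooperate', 'cooperate') (8, 8): more of both
```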
The problem is that if the universe changes, and/or their understanding of the universe changes, one or both of the agents may come to regret the modification: there may be a new bargain, better for one or both parties, that is no longer achievable after they self-modified. So irrevocable self-modification may be a bad idea in the long term. But it can sometimes be a good idea in the short term.
An easier way to see this point is to simply notice that to make a promise is to (in some sense) self-modify your utility function. And, under certain circumstances, it is rational to make a promise with the intent of keeping it.
Your comment seems absolutely right; I have no idea where the whole ‘turn itself off’ thing came from.
Suzanne is proposing that that’s (essentially) what happens to wireheads when they finger their reward signal—they collapse in an ecstatic heap.
In reality, there are, of course, other types of wirehead behaviour to consider. The heroin addict doesn’t exactly collapse in a corner when looking for their next fix.
Fair point about efficiency-preserving transformations and penalised approximations; I didn’t think of that. The point still kind-of stands, though, since neither of those modifications should produce any drastic change in behaviour.
Eeek! As I may have previously mentioned, you are planning on putting way more stuff in there than is a good idea, IMHO.