“Goal” is a natural idea for describing AIs with limited resources. These AIs won’t be able to make optimal decisions, and their decisions can’t be easily summarized in terms of some goal; but unlike the blue-minimizing robot, they have a fixed preference ordering that doesn’t gradually drift away from what it was originally, and over time they tend to get better at following it.
For example, if a goal is encrypted, and it takes a huge amount of computation to decrypt it, the system’s behavior prior to that point won’t depend on the goal, but the system will work on decrypting it and eventually will follow it. This encrypted goal is probably more predictive of long-term consequences than anything else in the details of the original design, but it doesn’t predict the system’s behavior during the first stage (and if there is only a small probability that all the resources in the universe would suffice to decrypt the goal, the system’s behavior will probably never depend on the goal). Similarly, even if there is no explicit goal, as in the case of humans, it might be possible to work with an idealized goal that, like the encrypted goal, can’t be easily evaluated, and so won’t influence behavior for a long time.
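A minimal toy sketch of this setup (my own illustration, not anything from the comment above; the `TimelockAgent` name, the tiny modulus, and the repeated-squaring puzzle are assumptions loosely modeled on Rivest-style time locks, standing in for “a huge amount of computation”): the agent’s behavior before decryption is literally the same function for every possible plaintext goal, and only afterwards does the goal start steering action.

```python
import hashlib

def timelock_encrypt(goal: bytes, n: int, seed: int, steps: int) -> int:
    """XOR the goal with a key derived from `steps` sequential squarings mod n."""
    x = seed
    for _ in range(steps):
        x = pow(x, 2, n)
    key = int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big")
    return int.from_bytes(goal.ljust(32, b"\0"), "big") ^ key

class TimelockAgent:
    """Agent whose plaintext goal is unavailable until the squarings finish."""
    def __init__(self, ciphertext: int, n: int, seed: int, steps: int):
        self.ciphertext, self.n = ciphertext, n
        self.x, self.remaining = seed, steps
        self.goal = None  # unknown until the sequential work is done

    def act(self, observation=None):
        if self.goal is None:
            # This branch is identical for every possible plaintext goal:
            # the agent just grinds through the inherently sequential work.
            self.x = pow(self.x, 2, self.n)
            self.remaining -= 1
            if self.remaining == 0:
                key = int.from_bytes(hashlib.sha256(str(self.x).encode()).digest(), "big")
                self.goal = (self.ciphertext ^ key).to_bytes(32, "big").rstrip(b"\0")
            return "keep_computing"
        # Only from here on does behavior depend on the decrypted goal.
        return f"pursue:{self.goal.decode()}"

# Toy usage: a tiny modulus and step count stand in for a cryptographically hard puzzle.
n, seed, steps = 59 * 71, 7, 10_000
ct = timelock_encrypt(b"maximize paperclips", n, seed, steps)
agent = TimelockAgent(ct, n, seed, steps)
```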
My point is that there are natural examples where goals and the character of behavior don’t resemble each other, so that neither can be easily inferred from the other, even though both can be observed as aspects of the system. It’s useful to distinguish these ideas.
I agree preferences aren’t reducible to actual behavior. But I think they are reducible to dispositions to behave, i.e., behavior across counterfactual worlds. If a system prefers a specific event Z, that means that, across the counterfactual environments you could have put it in, the future would on average have contained more Z the larger and more direct a causal impact the system’s distinguishing features had on the world.
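One rough way to operationalize that criterion (a sketch under my own assumptions; `sample_env`, `rollout`, and `count_Z` are hypothetical stand-ins, not anything defined in the discussion): compare how much Z obtains, averaged over sampled counterfactual environments, when the system’s distinguishing features are allowed to act versus when they are ablated.

```python
from statistics import mean

def preference_score(system_policy, inert_policy, sample_env, rollout, count_Z, n=1000):
    """Estimated shift in expected amount of Z attributable to the system acting.

    `sample_env()` draws a counterfactual environment, `rollout(env, policy)` simulates
    it forward, and `count_Z` measures how much of event Z the resulting future contains.
    A large positive score is (rough) evidence that the system prefers Z.
    """
    deltas = []
    for _ in range(n):
        env = sample_env()  # one counterfactual world the system could have been put in
        with_system = count_Z(rollout(env, system_policy))
        without_influence = count_Z(rollout(env, inert_policy))  # distinguishing features ablated
        deltas.append(with_system - without_influence)
    return mean(deltas)
```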
The examples I used seem to apply to “dispositions” to behave in the same way (I wasn’t making this distinction). There are settings where the goal can’t be clearly inferred from behavior, or from the collection of hypothetical behaviors in response to various environments, at least if we keep the environments relatively close to what might naturally occur, even though in those settings the goal can be observed “directly” (defined as an idealization based on the AI’s design).
An AI with an encrypted goal (i.e., the AI itself doesn’t know the goal in explicit form, but the goal can be abstractly defined as the result of decryption) won’t behave in accordance with that goal in any environment that doesn’t magically let it decrypt the goal quickly; there is no tendency to push events towards what the encrypted goal specifies until the goal is decrypted (which, with high probability, may be never).
I don’t think a sufficiently well-encrypted ‘preference’ should be counted as a preference for present purposes. In principle, you can treat any physical chunk of matter as an ‘encrypted preference’, because if the AI just were a key of exactly the right shape, then it could physically interact with the lock in question to acquire a new optimization target. But if neither the AI nor anything very similar to the AI in nearby possible worlds actually acts as a key of the requisite sort, then we should treat the parts of the world that a distant AI could interact with to acquire a preference as, in our world, mere window dressing.
Perhaps if we actually built a bunch of AIs, and one of them was just like the others except that, where the others of its kind had a preference module, it had a copy of The Wind in the Willows, we would speak of this new AI as having an ‘encrypted preference’ consisting of a book, with no easy way to treat that book as a decision criterion the way its brother- and sister-AIs do with their homologous components. But I don’t see any reason right now to make our real-world usage of the word ‘preference’ correspond to that possible world’s usage. It’s too many levels of abstraction away from what we should be worried about, which are the actual real-world effects different AI architectures would have.