A system’s goal has to be some event that can be brought about.
This sounds like a potentially confusing level of simplification; a goal should be regarded as at least a way of comparing possible events.
When we’re talking about an artificial intelligence’s preferences, we’re talking about the things it tends to optimize for, not the things it ‘has in mind’ or the things it believes are its preferences.
Its behavior is what makes its goal important. But in a system designed to follow an explicitly specified goal, it does make sense to talk of its goal apart from its behavior. Even though its behavior will reflect its goal, the explicit specification reflects the goal more faithfully than the behavior does.
If the goal is implemented as a part of the system, other parts of the system can store some information about the goal: certain summaries of it or inferences drawn from it. This information can be thought of as beliefs about the goal. And if the goal is not “logically transparent”, that is, if its specification is such that drawing concrete conclusions about what it states in particular cases is computationally expensive, then the system never knows explicitly what its goal says; it only ever has beliefs about particular aspects of the goal.
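A minimal toy sketch of this setup (purely illustrative; every name in it is made up rather than taken from any actual design): the goal module is expensive to evaluate, so the rest of the system only ever accumulates cached partial conclusions about it, i.e. beliefs about particular aspects of the goal.

```python
import hashlib

class OpaqueGoal:
    """A goal whose specification is expensive to evaluate in particular cases."""
    def __init__(self, spec):
        self._spec = spec  # e.g. a long formula or program

    def evaluate(self, outcome):
        # Stand-in for a computationally expensive deduction from the spec;
        # here just a deterministic placeholder score.
        digest = hashlib.sha256(self._spec + outcome.encode()).digest()
        return digest[0] / 255.0

class Agent:
    """Holds only partial beliefs (cached conclusions) about its own goal."""
    def __init__(self, goal):
        self._goal = goal
        self._beliefs = {}  # outcome -> what the agent believes the goal says about it

    def preference(self, outcome):
        # The agent never knows the whole goal explicitly; it accumulates
        # beliefs about particular aspects of it as it can afford to compute them.
        if outcome not in self._beliefs:
            self._beliefs[outcome] = self._goal.evaluate(outcome)
        return self._beliefs[outcome]

agent = Agent(OpaqueGoal(b"some long, logically opaque specification"))
print(agent.preference("paint everything blue"))
```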
But in a system designed to follow an explicitly specified goal, it does make sense to talk of its goal apart from its behavior. Even though its behavior will reflect its goal, the explicit specification reflects the goal more faithfully than the behavior does.
Perhaps, but I suspect that for most possible AIs there won’t always be a fact of the matter about where its preference is encoded. The blue-minimizing robot is a good example. If we treat it as a perfectly rational agent, then we might say that it has temporally stable preferences that are very complicated and conditional; or we might say that its preferences change at various times, and are partly encoded, for instance, in the properties of the color-inverting lens on its camera. An AGI’s response to environmental fluctuation will probably be vastly more complicated than a blue-minimizer’s, but the same sorts of problems arise in modeling it.
I think it’s more useful to think of rational-choice-theory-style preferences as useful theoretical constructs—like a system’s center of gravity, or its coherently extrapolated volition—than as real objects in the machine’s hardware or software. This sidesteps the problem of haggling over which exact preferences a system has, how those preferences are distributed over the environment, how to decide which of several causally redundant encodings is ‘really’ the preference encoding, etc. See my response to Dave.
“Goal” is a natural idea for describing AIs with limited resources: such AIs won’t be able to make optimal decisions, and their decisions can’t be easily summarized in terms of some goal; but unlike the blue-minimizing robot, they have a fixed preference ordering that doesn’t gradually drift away from what it originally was, and they eventually tend to get better at following it.
For example, if a goal is encrypted, and it takes a huge amount of computation to decrypt it, the system’s behavior prior to that point won’t depend on the goal, but the system will work on decrypting it and will eventually follow it. The encrypted goal is probably more predictive of long-term consequences than anything else in the details of the original design, but it doesn’t predict the behavior during the first stage (and if there is only a small probability that all the resources in the universe would suffice to decrypt the goal, then the system’s behavior will probably never come to depend on the goal). Similarly, even if there is no explicit goal, as in the case of humans, it might be possible to work with an idealized goal that, like the encrypted goal, can’t be easily evaluated, and so won’t influence behavior for a long time.
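To make the encrypted-goal example concrete, here is a toy sketch (my own construction, using a trivial XOR cipher and a made-up plaintext marker, not anything from the discussion). In phase one the agent’s behavior is fixed by its decryption routine and doesn’t depend on what the goal says; only after decryption does the goal start steering behavior.

```python
def xor_crypt(data, key):
    """XOR cipher; encryption and decryption are the same operation."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class EncryptedGoalAgent:
    def __init__(self, encrypted_goal):
        self._ciphertext = encrypted_goal
        self._goal = None  # explicit goal, once (if ever) recovered
        # Brute-force search over a small key space stands in for
        # "a huge amount of computation".
        self._keys = (bytes([a, b]) for a in range(256) for b in range(256))

    def step(self):
        if self._goal is None:
            # Phase 1: work on decryption. This work is the same
            # whatever the goal turns out to say.
            key = next(self._keys, None)
            if key is None:
                return "resources exhausted; behavior never depended on the goal"
            candidate = xor_crypt(self._ciphertext, key)
            if candidate.startswith(b"GOAL:"):  # recognizable plaintext marker
                self._goal = candidate[5:].decode()
            return "working on decryption"
        # Phase 2: only now does behavior depend on the goal's content.
        return "pursuing goal: " + self._goal

agent = EncryptedGoalAgent(xor_crypt(b"GOAL:maximize paperclips", bytes([7, 42])))
steps = 0
while True:
    status = agent.step()
    steps += 1
    if status != "working on decryption":
        break
print(steps, "steps;", status)
```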
My point is that there are natural examples where goals and the character of behavior don’t resemble each other, so that each can’t be easily inferred from the other, while both can be observed as aspects of the system. It’s useful to distinguish these ideas.
I agree preferences aren’t reducible to actual behavior. But I think they are reducible to dispositions to behave, i.e., behavior across counterfactual worlds. If a system prefers a specific event Z, that means that, across counterfactual environments you could have put it in, the future would on average have had more Z the more its specific distinguishing features had a large and direct causal impact on the world.
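One way to cash this out (a purely illustrative toy, not a proposal for how to actually measure preferences): sample counterfactual environments, vary how much causal influence the system has on the outcome, and read the “preference for Z” off the way expected Z rises with that influence.

```python
import random

def rollout(environment, influence):
    """Probability of event Z in one counterfactual world.

    'environment' is the baseline tendency toward Z; 'influence' is how much
    direct causal impact the system's distinguishing features have on the
    outcome. This toy system nudges outcomes toward Z in proportion to its
    influence, which is what its 'preferring Z' consists in here.
    """
    return min(1.0, environment + 0.5 * influence)

def expected_z(influence, n=10000):
    rng = random.Random(0)
    return sum(rollout(rng.random() * 0.5, influence) for _ in range(n)) / n

# More causal impact -> more Z on average across sampled environments:
# the dispositional sense in which the system "prefers" Z.
for influence in (0.0, 0.5, 1.0):
    print(influence, round(expected_z(influence), 3))
```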
The examples I used seem to apply to “dispositions” to behave in the same way (I wasn’t making this distinction). There are settings where the goal can’t be clearly inferred from behavior, or from a collection of hypothetical behaviors in response to various environments, at least if we keep the environments relatively close to what might naturally occur, even as in those settings the goal can be observed “directly” (defined as an idealization based on the AI’s design).
An AI with an encrypted goal (i.e. the AI itself doesn’t know the goal in explicit form, but the goal can be abstractly defined as the result of decryption) won’t behave in accordance with it in any environment that doesn’t magically let it decrypt the goal quickly; there is no tendency to push events towards what the encrypted goal specifies until the goal is decrypted (which, with high probability, may be never).
I don’t think a sufficiently well-encrypted ‘preference’ should be counted as a preference for present purposes. In principle, you can treat any physical chunk of matter as an ‘encrypted preference’, because if the AI just were a key of exactly the right shape, then it could physically interact with the lock in question to acquire a new optimization target. But if neither the AI nor anything very similar to the AI in nearby possible worlds actually acts as a key of the requisite sort, then we should treat the parts of the world that a distant AI could interact with to acquire a preference as, in our world, mere window dressing.
Perhaps if we actually built a bunch of AIs, and one of them was just like the others except where others of its kind had a preference module, it had a copy of The Wind in the Willows, we would speak of this new AI as having an ‘encrypted preference’ consisting of a book, with no easy way to treat that book as a decision criterion like its brother- and sister-AIs do for their homologous components. But I don’t see any reason right now to make our real-world usage of the word ‘preference’ correspond to that possible world’s usage. It’s too many levels of abstraction away from what we should be worried about, which are the actual real-world effects different AI architectures would have.