I found it useful to compare a shard that learns to pursue juice (positive value) to one that avoids eating mouldy food (prohibition), just so they’re on the same kind of framing/scale.
It feels like one possible difference between prohibitions and positive values is that positive values pick out a relatively small portion of the state space as good/desirable (there are not many states in which you’re drinking juice), and hence may activate less frequently, or only when such parts of the state space are accessible. Prohibitions, by contrast, mark out a large part of the state space as bad, but not so large that the complement becomes small: there are perhaps many potential states where you eat mouldy food, yet the complement of that set is still nowhere near as small as the set of juice-drinking states. The first case seems better suited to forming longer-term plans towards the small part of the state space (cf. this definition of optimisation), whereas the second is less so. Shards that start doing optimisation like this are hence more likely to become agentic/self-reflective/meta-cognitive, etc.
In effect, positive values are more likely/able to self-chain because they actually (kind of, implicitly) specify optimisation goals, so shards can optimise for them, and hence grow and improve their optimisation power, whereas prohibitions specify a much larger desirable state set, and so don’t require or encourage optimisation as much.
As an implication of this, I could imagine that in most real-world settings “don’t kill humans” would act as you describe, but in environments where it’s very easy to accidentally kill humans, such that states where you don’t kill humans are actually very rare, then the “don’t kill humans” shard could chain into itself more, and hence become more sophisticated/agentic/reflective. Does that seem right to you?
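To make that intuition slightly more concrete, here is a toy Monte-Carlo sketch (entirely my own construction; the numbers and the “counterfactual prevention counts as a reinforcement event” proxy are assumptions, not anything from the post): in a world where the accident is nearly inevitable without intervention, the prohibition shard’s bids change outcomes far more often, so it plausibly collects far more reinforcement events.

```python
import random

def prohibition_reinforcement(p_accident, shard_strength, n=100_000, seed=0):
    """Fraction of episodes where the prohibition shard's bid counterfactually
    prevented the bad outcome -- used here as a crude proxy for a
    reinforcement event crediting that shard."""
    rng = random.Random(seed)
    events = 0
    for _ in range(n):
        accident_pending = rng.random() < p_accident    # would kill without intervention
        shard_fires = rng.random() < shard_strength     # shard wins the bid this episode
        if accident_pending and shard_fires:
            events += 1  # the shard changed the outcome, so it gets credit
    return events / n

# A "safe" world vs a world where accidentally killing is very easy:
safe = prohibition_reinforcement(p_accident=0.01, shard_strength=0.5)
dangerous = prohibition_reinforcement(p_accident=0.9, shard_strength=0.5)
```

With these made-up numbers the dangerous world produces roughly ninety times as many reinforcement events per episode, which is the sense in which I’d expect the shard to grow more sophisticated there.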
> As an implication of this, I could imagine that in most real-world settings “don’t kill humans” would act as you describe, but in environments where it’s very easy to accidentally kill humans, such that states where you don’t kill humans are actually very rare, then the “don’t kill humans” shard could chain into itself more, and hence become more sophisticated/agentic/reflective. Does that seem right to you?
I think that “don’t kill humans” can’t chain into itself because there’s not a real reason for its action-bids to systematically lead to future scenarios where it again influences logits and gets further reinforced, whereas “drink juice” does have this property.
In the described scenario, “don’t kill humans” may in fact lead to scenarios where the AI can again kill humans, but this feels like an ambient statistical property of the world (killing people is easy) and not like a property of the shard’s optimization (the shard isn’t influencing logits on the basis of whether its actions will lead to future opportunities to not kill people). So I do expect “don’t kill people” to become more sophisticated/reflective in that scenario, but I intuitively feel there remains some important difference that I can’t quite articulate.
> I think that “don’t kill humans” can’t chain into itself because there’s not a real reason for its action-bids to systematically lead to future scenarios where it again influences logits and gets further reinforced, whereas “drink juice” does have this property.
I’m trying to understand why the juice shard has this property. Which of these (if any) is the explanation:
1. Bigger juice shards will bid on actions which lead to juice multiple times over time, as they push the agent towards juice from quite far away (both temporally and spatially), and hence will be strongly reinforced when the reward comes, even though it’s only a single reinforcement event (actually getting the juice).
2. Juice will be acquired more with stronger juice shards, leading to a kind of virtuous cycle, assuming that getting juice always yields positive reward (or positive advantage/reinforcement, to avoid zero-point issues).
The first seems at least plausible to also apply to “avoid moldy food”, if it requires multiple steps of planning to avoid moldy food (throwing out moldy food, buying fresh ingredients and then cooking them, etc.).
The second does seem more specific to juice than mold, but it seems to me that’s because getting juice is rare, and is something we can get better and better at, whereas avoiding moldy food is fairly easy to learn, and past that there’s not much further reinforcement to be had. If that’s the case, then I kind of see it as being covered by the rare-states explanation in my previous comment, or maybe an extension of that to “rare states and skills in which improvement leads to more reward”.
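Trying to make that second point concrete, here’s a toy sketch (entirely my own construction, with made-up update rules): if every success is a full-size reinforcement event, shard strength compounds without bound; if reinforcement scales with how much is left to learn (an advantage-like signal), strength provably stays below 1 and flattens out, which is roughly how I’d expect “avoid moldy food” to saturate once it’s reliable.

```python
import random

def compounding_shard(n_steps=1000, seed=0):
    """Pursuit-style shard: every success is a fixed-size reinforcement event."""
    rng = random.Random(seed)
    strength = 0.05
    for _ in range(n_steps):
        if rng.random() < min(1.0, strength):  # strength ~ probability its bid wins
            strength += 0.05                    # full reinforcement on every success
    return strength

def plateauing_shard(n_steps=1000, seed=0):
    """Prohibition-style shard: reinforcement shrinks as the behaviour becomes
    reliable, so there's little further reinforcement once it's learned."""
    rng = random.Random(seed)
    strength = 0.05
    for _ in range(n_steps):
        p = min(1.0, strength)
        if rng.random() < p:
            strength += 0.05 * (1.0 - p)        # advantage-like: vanishes near reliability
    return strength
```

Under these made-up dynamics the compounding shard keeps getting stronger even after its behaviour is already reliable, while the plateauing one converges below 1 and then barely changes.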
Having just read tailcalled’s comment, I think that is in some sense another way of phrasing what I was trying to say: rare (but not too rare) states are likely to mean that policy-caused variance is high on those decisions. Probably policy-caused variance is the more fundamental explanation, closer to what’s actually happening in the learning process, but maybe high-reward/high-reinforcement states of a certain rarity are one possible environmental feature that produces policy-caused variance.
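One way to operationalize this (my construction, not necessarily tailcalled’s definition): in a toy setting where a rewarding state occurs with probability p_state and, within it, the policy takes the rewarding action with probability p_action, the law of total variance splits the return’s variance into a policy-caused part, E[Var(R | state)] = p_state · p_action · (1 − p_action), and an environment-caused part, Var(E[R | state]) = p_state · (1 − p_state) · p_action².

```python
import random
import statistics

def decompose_return_variance(p_state, p_action, n=200_000, seed=0):
    """Monte-Carlo total variance of a 0/1 return, alongside the analytic
    law-of-total-variance split into policy-caused and environment-caused parts."""
    rng = random.Random(seed)
    returns = [
        1.0 if (rng.random() < p_state and rng.random() < p_action) else 0.0
        for _ in range(n)
    ]
    total = statistics.pvariance(returns)
    policy_caused = p_state * p_action * (1 - p_action)   # E[Var(R | state)]
    env_caused = p_state * (1 - p_state) * p_action ** 2  # Var(E[R | state])
    return total, policy_caused, env_caused

total, policy_caused, env_caused = decompose_return_variance(0.1, 0.5)
```

The policy-caused term scales with p_state, so very rare states contribute little policy-caused variance; the “not too rare” end presumably comes from the policy already being near-deterministic in ubiquitous states, which this toy doesn’t model.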