I’m not sure how much I expect something like deceptive alignment from just scaling up LLMs. My guess is that in some limit this gets AGI, and something deceptively aligned by default, but in reality we end up taking a shorter path to AGI, which also leads to deceptive alignment by default. However, I’ll consider the LLM AGI case in this comment; I don’t think it changes much.
The main reason I expect something like deceptive alignment is that we are expecting the thing to actually be really powerful, and so it actually needs to be coherent both over long sequences of actions and into new distributions it wasn’t trained on. It seems pretty unlikely that we end up in a world where we train the AI to act aligned on one distribution and it then generalizes exactly as we would like on a very different distribution. Or at least that it generalizes the things that we care about it generalizing, for example:
Don’t seek power, but do gain enough resources to do the difficult task
Don’t manipulate humans, but do tell them useful things and get them to sign off on your plans
I don’t think I agree with your counters to the specific arguments about deceptive alignment:
For Quintin’s post, I think Steve Byrnes’ comment is a good approximation of my views here
I don’t fully know how to think about the counting arguments when things aren’t crisply divided, and I do agree that the LLM AGI would likely be a big mess with no separable “world model”, “objective”, or “planning engine”. However, it really isn’t clear to me that this makes the case weaker; the AI will be doing something powerful (by assumption), and it’s not clear that it will do the powerful thing that we want.
I really strongly agree that the NN prior can be really hard to reason about. It really seems to me like this again makes the situation worse; we might have a really hard time thinking about how the big powerful AI will generalize when we ask it to do powerful stuff.
I think a lot of this comes from some intuition like “Condition on the AI being powerful enough to do the crazy stuff we’ll be asking of an AGI. If it is this capable but doesn’t generalize the task in exactly the way you want, then you get something that looks like deceptive alignment.”
Not being able to think about the NN prior really just seems to make things harder, because we don’t know how it will generalize things we tell it.
Conditional on the AI never doing something like manipulating/deceiving[1] the humans into thinking it is aligned, so that it can later do things the humans don’t like, I am much more optimistic about the whole situation.
[1] The AI could be on some level not “aware” that it was deceiving the humans, a la Deep Deceptiveness.