My summary of the alignment problem
I’ve been looking into the AI alignment problem over the last couple of days and came up with the following summary of what the problems are and why. Also, I’d prefer the umbrella name of the human alignment problem, as AI alignment is just a subset of it.
The problem is that we don’t know what we want.
And even if we individually knew, we couldn’t agree with others. (opinion aggregation)
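As a concrete illustration of why opinion aggregation is hard, here is a toy sketch of the classic Condorcet paradox (voters and options are made up for illustration, not taken from anything in this post): three voters, each with a perfectly coherent ranking, whose pairwise majority vote is cyclic, so there is no single "what the group wants" to read off.

```python
# Toy Condorcet-paradox sketch (my own illustration): individually consistent
# preferences can aggregate into a cyclic, and therefore unusable, group preference.

voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank option x above option y."""
    votes_for_x = sum(ranking.index(x) < ranking.index(y) for ranking in voters)
    return votes_for_x > len(voters) / 2

print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True -> A beats B beats C beats A: a cycle
```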
Even if we agreed with others on what we want, it would be hard to implement it. (coordination)
Because of game-theoretic incentives, momentum, and disagreements about how to do it.
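For the game-theory part, a minimal prisoner's-dilemma sketch (standard textbook payoffs, nothing specific to this post): both actors agree that mutual cooperation beats mutual defection, yet each one's individually best move is still to defect.

```python
# Minimal prisoner's dilemma (my own illustration of why agreeing on the goal
# doesn't automatically produce coordination on the actions).

# Payoffs as (my_payoff, their_payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """My payoff-maximising move given what the other actor does."""
    return max(["cooperate", "defect"],
               key=lambda my_move: payoffs[(my_move, their_move)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect -> defection dominates, and we end up at (1, 1) < (3, 3)
```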
Maybe we can create something smarter than us that solves these problems.
But we don’t know how to create something smarter than us.
Maybe we can create something that will start out dumber, but can learn and will eventually become smarter.
We are afraid that something like this could become very powerful very quickly, and it’s likely to kill us—either as a mere side-effect or because of conflicting goals. (AI alignment problem)
But we don’t know how to describe what it should learn. (outer alignment)
So maybe we can just give examples of what we know we want it to learn. (ML training)
We could make it learn from humans, but the data we get from humans are inconsistent.
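A tiny sketch of that inconsistency, with made-up situations and labels: once the same situation gets contradictory labels from different humans (or the same human on different days), there is no single labeling function left to learn, and something has to decide whose answer wins.

```python
# Toy sketch of inconsistent human data (situations and labels are invented for
# illustration): the snippet just counts situations that received contradictory labels.
from collections import defaultdict

human_labels = [
    # (situation, label given by some human)
    ("white lie to spare feelings", "acceptable"),
    ("white lie to spare feelings", "unacceptable"),
    ("break a promise to help a stranger", "acceptable"),
    ("break a promise to help a stranger", "unacceptable"),
    ("return a lost wallet", "acceptable"),
]

labels_per_situation = defaultdict(set)
for situation, label in human_labels:
    labels_per_situation[situation].add(label)

contradictions = {s for s, labels in labels_per_situation.items() if len(labels) > 1}
print(f"{len(contradictions)} of {len(labels_per_situation)} situations have contradictory labels")
```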
But it’s impossible to cover all the cases with examples, so in practice the ML model will face situations quite different from the ones it was trained on. (distributional shift)
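Here is a minimal numpy sketch of distributional shift, with a made-up target function standing in for "what we want": a model fit on examples from one region of input space looks fine there and is badly wrong on inputs it never saw.

```python
# Toy distributional-shift sketch (made-up target and data, my own illustration).
import numpy as np

rng = np.random.default_rng(0)

def true_target(x):
    return np.sin(x)  # stands in for the behaviour we actually want

# Training examples only cover x in [0, 3].
x_train = rng.uniform(0, 3, size=200)
y_train = true_target(x_train)

# "ML training": fit a cubic polynomial to the examples.
model = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

x_in_dist = rng.uniform(0, 3, size=200)  # situations like the training examples
x_shifted = rng.uniform(6, 9, size=200)  # situations we never gave examples for

print("mean error in-distribution:",
      np.abs(model(x_in_dist) - true_target(x_in_dist)).mean())
print("mean error after the shift:",
      np.abs(model(x_shifted) - true_target(x_shifted)).mean())
```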
And if what we want the ML model to learn is very specific and complicated, it’s quite likely that what it actually learns will behave very differently outside our examples from how we’d want it to. (inner alignment)
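And a loose sketch of that worry (again with made-up data, and only a crude stand-in for the real inner-alignment concern): when the training examples don't pin the goal down, the learned rule can agree with the intended one on every example while computing something different, and the difference only shows up off-distribution.

```python
# Toy proxy-goal sketch (my own illustration, a crude stand-in for inner misalignment).
import numpy as np

rng = np.random.default_rng(0)

n = 200
intended = rng.normal(size=n)  # the feature we *want* the model to rely on
proxy = intended.copy()        # a proxy feature that coincides with it in training
X_train = np.column_stack([intended, proxy])
y_train = intended             # intended behaviour: output the intended feature

# "Training" (least squares) finds *a* rule that fits every example perfectly...
weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
print("learned weights:", weights)  # roughly [0.5, 0.5], not the intended [1, 0]

# ...but where the intended feature and the proxy come apart, the learned rule
# and the intended rule disagree.
intended_new = rng.normal(size=5)
proxy_new = rng.normal(size=5)  # no longer tied to the intended feature
X_new = np.column_stack([intended_new, proxy_new])
print("intended outputs:", intended_new)
print("model outputs:   ", X_new @ weights)
```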
It will also be hard to tell apart the cases where it does what we want from the cases where it doesn’t. (eliciting latent knowledge)
Generally, sufficiently capable ML models are hard to understand. (interpretability)
Especially if the model knows it can pursue the goals it picked up in training more freely when we are not looking. (deceptive mesa-optimizers)
Also, if we realize it’s doing something other than what we wanted, it might be hard to change it, because we’d be interfering with its learned goals. (corrigibility)
This is just a summary of my current understanding of the problem landscape. I don’t subscribe to the stated motivations and conclusions, but more on that some other time.
Please let me know if I’ve omitted or misrepresented some important aspect of the problem (given how simplified this summary is intended to be).