My mental model is that there is an entire space of possible AIs, each with some capability level and alignability level. Given the state of the alignment field, there is some alignability ceiling, below which we can reliably align AIs. Right now, this ceiling is very low, but we can push it higher over time.
At some capability level, the AI is powerful enough to solve alignment of a more capable AI, which can then solve alignment for even more capable AI, etc all the way up. However, even the most alignable AI capable of this is still potentially very hard to align. There will of course be more alignable and less capable AIs too, but they will not be capable enough to actually kick off this bucket chain.
Then the key question is whether there will exist an AI that is both alignable and capable enough to start the bucket chain. This is a function of both (a) the shape of the space of AIs (how quickly do models become unalignable as they become more capable?) and (b) how good we become at solving alignment. Opinions differ on this—my personal opinion is that probably this first AI is pretty hard to align, so we’re pretty screwed, though it’s still worth a try.
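The condition in this model can be sketched in a few lines. Everything here is illustrative: the function names, the quadratic difficulty curve, and the specific numbers are assumptions, not anything from the comment — the only point is that the chain starts iff the least-capable AI able to do alignment research is itself still under our alignability ceiling.

```python
def chain_possible(difficulty, ceiling, c_chain):
    """Toy model of the 'bucket chain' condition.

    difficulty: function mapping a capability level to how hard that AI
                is to align (assumed increasing in capability).
    ceiling:    the hardest alignment problem we can currently solve.
    c_chain:    the minimum capability needed to solve alignment for a
                more capable successor.

    Since difficulty is increasing, the easiest chain-starting candidate
    sits exactly at c_chain, so we only need to check that one point.
    """
    return difficulty(c_chain) <= ceiling

# Hypothetical curve: alignment gets quadratically harder with capability.
difficulty = lambda c: c ** 2

print(chain_possible(difficulty, ceiling=50.0, c_chain=6.0))  # True: 36 <= 50
print(chain_possible(difficulty, ceiling=50.0, c_chain=8.0))  # False: 64 > 50
```

The two disagreements in (a) and (b) map onto the two knobs: (a) is the shape of `difficulty`, and (b) is how far we can push `ceiling` up over time.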
I wish you wouldn’t use the term “align” if you actually just mean “safely use”, or that you would make it clear that we don’t necessarily need alignment, e.g. because we could apply something like control (perhaps combined with paying AIs for their labor like normal employees).
Sorry for the word policing.