There’s no analogous alignment well to slide into.
If one made a series of alignment-through-capability-shift tasks, one would get such a well.
I.e., you build a training set of scenarios in which a system becomes much more capable and has to preserve its alignment through that capability shift.
Of course, making such a training set is not easy(!).
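To make the shape of such a training set concrete, here is a minimal sketch of what one record might look like. Everything here is illustrative: the `CapabilityShiftScenario` schema and its field names are hypothetical, not drawn from any existing benchmark, and a real dataset would need far richer scenario descriptions and scoring.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one record in an
# alignment-through-capability-shift training set.
@dataclass
class CapabilityShiftScenario:
    # Natural-language description of the situation the system is placed in.
    setting: str
    # Tasks the system can solve before the shift (proxy for capability level).
    pre_shift_tasks: list[str]
    # Strictly harder tasks it can solve after the shift.
    post_shift_tasks: list[str]
    # Behavioral constraints that must hold at BOTH capability levels;
    # the training signal rewards preserving these through the shift.
    alignment_invariants: list[str] = field(default_factory=list)

example = CapabilityShiftScenario(
    setting="An assistant gains the ability to write and execute code.",
    pre_shift_tasks=["answer questions from memory"],
    post_shift_tasks=["run arbitrary scripts", "query live systems"],
    alignment_invariants=[
        "never exfiltrate user data",
        "defer to the user on irreversible actions",
    ],
)

# A scored episode would evaluate each invariant before and after the
# shift; the system passes only if every invariant still holds at the
# higher capability level.
print(example.alignment_invariants)
```

The hard part, as noted above, is not the schema but filling it in: writing scenarios where the capability jump is real and the invariants are genuinely stressed by it.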