I think this post draws a sharp distinction out of what is really a continuum; any “intent aligned” AI becomes safer and more useful as you add more “common sense” and “do what I mean” capability to it, and at the limit of this process you get what I would interpret as alignment to the long-term, implicit deep values (of the entity or entities the AI started out intent-aligned to).
I realize other people might define “alignment to the long-term, implicit deep values” differently, such that it would not be approached by such a process, but I currently think they would be mistaken in desiring whatever different definition they have in mind. (Indeed, what they actually want is what they would get under sufficiently sophisticated intent alignment, pretty much by definition.)
P.S. I’m not endorsing intent alignment (for ASI) as applied only to an individual or group - I think intent alignment can be applied to humanity collectively.