No one knows how to align AI: No one can precisely instruct AI to align with complex human values or happiness.
OTOH, alignment/control is part of functionality, and any AI that does something useful, that's commercially viable, must be reasonably responsive to its users' wishes: so all commercially viable AIs, such as the GPTs, are aligned at a good-enough level.
The inevitable response to that is that what's good enough for a not-quite-human-level AI is not good enough for a superintelligence … which presupposes that the ASI is going to emerge suddenly and/or unexpectedly from the AHLI … in other words, that the ASI is not going to emerge from incremental improvements to both the capabilities and alignment of the seed AI.
And that, of course, is not obvious either.