I agree… it seems to me that the most likely course of events is for some researchers to develop a human+ / fast-capability-gain AI dedicated to solving AI alignment, and it's that AI which develops the solution that can then be implemented to align the inevitable AGI.