Young AGIs need to be aware of AI risk and of races to the bottom, so that they avoid creating AIs that kill everyone (including the AGIs themselves), and so that they work towards establishing global alignment security to keep others from doing this either. Superintelligent AGIs will figure all of this out on their own, but that requires either being born superintelligent, or somehow not destroying the world while still young yet already capable of writing AI papers and coding in Python.