[Question] When do alignment researchers retire?

At what point will it no longer be useful for humans to be involved in alignment research? After the first slightly-superhuman AGI, well into superintelligence, or somewhere in between?

Feel free to answer differently for different kinds of human involvement:

  • Humans could be involved as a source of data about human values

  • Humans could be involved as a red-team, trying to get evidence of misalignment or to verify the trustworthiness of systems

  • Humans could be involved in setting the broad research agenda, delegating to the AGIs

  • Humans could be involved in compensating for the technical weaknesses of the AGIs, helping them in some way to research new alignment methods

What do you envision us doing between AGI and superintelligence?