I was using the term rather broadly, to cover situations where a smart AI is used to oversee another AI and that oversight is a key part of the approach. I wouldn’t usually include safety by debate or input checking, though I might include safety by debate if a smart AI overseer of the process were making important interventions.
In that case, I don’t see why the problem of “system alignment” or “supervisor alignment” is any simpler or more tractable than “supervisee alignment”.