I have a fairly boring terminological comment: as far as I understand, what you're proposing is not a method of aligning ("directing") AI systems, but a method of opposing them.