Would CIRL (cooperative inverse reinforcement learning) with many human agents be a realistic model of our world?
What does AI alignment mean with respect to many humans with different goals? Are we implicitly assuming (across all our current agendas) that the end goal is an AGI that is corrigible to a single human instructor?
How do we synthesize the goals of so many human agents into one utility function? Are we assuming that solving alignment with a single supervisor is easier? Wouldn't having many supervisors meaningfully restrict the solution space?
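To make the question concrete, here is a minimal sketch of what a multi-principal CIRL game might look like, extending the two-player formulation of Hadfield-Menell et al. (2016). The number of humans n, the weights w_i, and the weighted-sum aggregation are illustrative assumptions on my part, not a settled proposal; choosing the aggregation rule is itself part of the problem these questions point at.

```latex
% Hypothetical n-human CIRL game (assumed extension of the two-player
% CIRL tuple; the index i, weights w_i, and aggregation are illustrative).
\[
M = \big\langle S,\; \{A^{H_1}, \dots, A^{H_n}, A^R\},\; T,\;
    \{\Theta_i\}_{i=1}^{n},\; r,\; P_0,\; \gamma \big\rangle
\]
\[
r(s, a^{H_1}, \dots, a^{H_n}, a^R; \theta_1, \dots, \theta_n)
  = \sum_{i=1}^{n} w_i\, r_i(s, a^{H_i}, a^R; \theta_i),
\qquad w_i \ge 0,\; \sum_{i=1}^{n} w_i = 1.
\]
% Each human H_i knows their own reward parameter \theta_i; the robot R
% must infer all of them from observed behavior. The weighted sum is only
% one possible aggregation, and fixing the weights w_i already encodes a
% normative choice about whose goals count for how much.
```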