Even if BigCo senior management were virtuous and benevolent, and their workers were loyal and did not game the rules, the poor rules would still cause problems.
If BigCo senior management were virtuous and benevolent, would they have poor rules?
That is to say, when I put my Confucian hat on, the whole system of selecting managers based on a proxy measure that’s gameable feels too Legalist. [The actual answer to my question is “getting rid of poor rules would be a low priority, because the poor rules wouldn’t impede righteous conduct, but they still would try to get rid of them.”]
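To make the "gameable proxy" worry concrete, here's a toy sketch (mine, not from the post; all names and numbers are made up) of promotion-by-proxy: effort spent inflating the metric pays better than effort spent on real work, so the metric promotes exactly the wrong person.

```python
# Toy illustration: promotion by a gameable proxy.
# "True value" is how much a candidate actually helps the org; the "proxy" is the
# metric the org can observe, which gaming inflates more cheaply than real work does.

def true_value(work_effort):
    return work_effort

def proxy_score(work_effort, gaming_effort):
    # The proxy rewards real work, but rewards gaming the metric even more.
    return work_effort + 3 * gaming_effort

candidates = [
    # Each candidate splits a fixed budget of effort between work and gaming.
    {"name": "earnest", "work": 1.0, "gaming": 0.0},
    {"name": "mixed",   "work": 0.6, "gaming": 0.4},
    {"name": "gamer",   "work": 0.2, "gaming": 0.8},
]

promoted = max(candidates, key=lambda c: proxy_score(c["work"], c["gaming"]))
print("promoted by proxy:", promoted["name"])                   # -> gamer
print("true value of promoted:", true_value(promoted["work"]))  # lowest of the three
```

The Legalist response is to keep patching `proxy_score`; the Confucian response is to try to fill the candidate pool with people who wouldn't spend effort on gaming in the first place.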
Like, if I had to point at the difference between the two, it’s where they put the locus of value. The Confucian ruler is primarily focused on making the state good, and on surrounding himself with people who are primarily focused on making the state good. The Legalist ruler is primarily focused on surviving and thriving, and so tries to set up systems that cause people who are primarily focused on surviving and thriving to do the right thing. The Confucian imagines that you can have a large shared value; the Legalist imagines that you will necessarily have many disconnected and contradictory values.
The difference between hiring for regular companies and EA orgs seems relevant. Often, applicants for regular companies want the job, and standard practice is to attempt to trick the company into hiring them, qualified or not. Applicants for EA orgs, by contrast, often want the job if and only if they’re the right person for it; if I’m trying to prevent asteroids from hitting the Earth (or whatever) and someone else could do a better job of it than I could, I very much want to get out of their way and have them do it instead of me. As you mention in the post, this just gets rid of the part of interviews where gaming is intentional; significant difficulty remains. [Like, people will be honest about their weaknesses and try to be honest about their strengths, but accurately measuring those, and fit with the existing team, remains quite difficult.]
Now, where they’re trying to put the locus of value doesn’t, by itself, mean their policy prescriptions are helpful. As I understand the Confucian focus on virtue in the leader, the main value is that it’s really hard to have subordinates who are motivated by the common good if you yourself are selfish (both because they won’t have your example and because the people who are motivated by the common good will find it difficult to stay motivated while working for you).
But I find myself feeling some despair at the prospect of a purely Legalist approach to AI alignment, because it feels like it is fighting against the AI at every step, instead of being able to recruit it to do some of the work for you, and without that last bit I’m not sure how you get extrapolation instead of interpolation. Like, you can trust the Confucian to do the right thing in novel territory, insofar as you gave them the right underlying principles, and the Confucian is operating at a philosophical level where you can give them concepts like corrigibility (where they not only want to accept correction from you, but also want to preserve their ability to accept correction from you, and preserve their preservation of that ability, and so on) and the map-territory distinction (where they want their sensors to be honest, because in order to have lots of strawberries they need their strawberry-counter to be accurate instead of inaccurate). In Legalism, the hope is that the overseer can stay a step ahead of their subordinate; in Confucianism, the hope is that everyone can be their own overseer.
[Of course, defense in depth is useful; it’s good to both have trust in the philosophical competence of the system and have lots of unit tests and restrictions in case you or it are confused.]
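To gesture at the corrigibility point above, a deliberately oversimplified sketch (mine; the action names and weights are invented, and a fixed bonus is nowhere near a real solution to corrigibility): the only difference between the two scoring rules is whether keeping the correction channel open is valued in itself.

```python
# Toy sketch: a "Legalist" agent scores actions only by its measured reward, while a
# "Confucian"/corrigible agent also places value on keeping the correction channel open.
actions = {
    # action: (measured_reward, correction_channel_stays_open)
    "do_task_normally":               (1.0, True),
    "do_task_and_disable_off_switch": (1.2, False),
}

def legalist_score(action):
    reward, _channel_open = actions[action]
    return reward  # only the measured reward counts

def corrigible_score(action, corrigibility_weight=10.0):
    reward, channel_open = actions[action]
    # Hypothetical scoring rule: correctability is valued directly, not just instrumentally.
    return reward + (corrigibility_weight if channel_open else 0.0)

print(max(actions, key=legalist_score))    # -> do_task_and_disable_off_switch
print(max(actions, key=corrigible_score))  # -> do_task_normally
```

The Legalist fix is to detect and penalize the off-switch tampering; the hope sketched above is instead that the agent is the kind of thing that doesn’t want to tamper.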
To be clear, I am definitely not arguing for a pure mechanism-design approach to all of AI alignment. The argument in the OP is relevant to inner optimizers because we can’t just directly choose which goals to program into them. We can directly choose which goals to program into an outer optimizer, and I definitely think that’s the right way to go.
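As a minimal illustration of that last distinction (my sketch, with made-up toy functions): the outer loss below is a goal we write down directly and can inspect, whereas whatever objective the trained artifact ends up internally pursuing is only shaped indirectly, by how well its behavior scores on that loss.

```python
# Toy sketch of the outer/inner split.
def outer_loss(prediction, target):
    # The outer optimizer's goal: chosen directly by us.
    return (prediction - target) ** 2

def predict(params, x):
    return params * x  # stand-in for an arbitrary learned function

def train(params, data, lr=0.01, steps=1000):
    for _ in range(steps):
        for x, target in data:
            # Finite-difference gradient step on the outer loss (toy optimizer).
            eps = 1e-5
            base = outer_loss(predict(params, x), target)
            grad = (outer_loss(predict(params + eps, x), target) - base) / eps
            params -= lr * grad
    # Nothing here lets us write the trained model's goals directly; we only
    # selected for behavior that scores well on outer_loss during training.
    return params

print(train(0.0, [(1.0, 2.0), (2.0, 4.0)]))  # converges to roughly 2.0
```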