I partly agree with this: with people, it is much harder to compensate for the problem than to determine what the problem is.
The reason I still see determining the principal-agent problem as a hard problem with people is that we are highly inconsistent: a single AI is more consistent than a single person, and much more consistent than several people in succession (as is the case with any normal job).
My model for this is that determining what the problem is costs only slightly more for a person than for the AI, but you will have to repeat the process many times for a human position, probably about once per person who fills it.
I see, so the argument is mostly that jobs are performed more stably and so you can learn better how to deal with the principal-agent problems that arise. This seems plausible.