I like the framing of the problem here: If a bureaucrat (or lawyer) acts on a fully specified set of rules and exercises no personal judgement, then they can be replaced by a machine. If they don’t want to be replaced by a machine, they should be able to prove that their personal judgement is indispensable.
That changes incentives for bureaucrats in quite a dramatic fashion.
Wow, that’s an implication I hadn’t considered! But you’re right on the money with that one.
The one danger I see here is that very simple models can often account for ~70% of the variance in a particular area, and people might be tempted to automate those decisions. But the remaining 30% is often highly complex and critical! So personally I wouldn’t automate a bureaucrat until something like a 95% or 99% match.
Though I’m sure there are bureaucrats who can be modeled 100% accurately, and they should be replaced. :P
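To make the threshold idea concrete, here’s a minimal sketch (with an entirely hypothetical case log) of how you might measure how often a simple model reproduces a bureaucrat’s past decisions, and only flag the role for automation above a strict agreement bar:

```python
def agreement_rate(human_decisions, model_decisions):
    """Fraction of cases where the model's decision matches the human's."""
    matches = sum(h == m for h, m in zip(human_decisions, model_decisions))
    return matches / len(human_decisions)

def safe_to_automate(rate, threshold=0.95):
    """Automate only when the match rate clears a high bar (95% here)."""
    return rate >= threshold

# Hypothetical decision log: approve/deny calls on 10 applications.
human = ["approve", "deny", "approve", "approve", "deny",
         "approve", "deny", "approve", "approve", "approve"]
model = ["approve", "deny", "approve", "approve", "deny",
         "approve", "deny", "deny", "approve", "approve"]

rate = agreement_rate(human, model)  # 9 of 10 match -> 0.9
print(safe_to_automate(rate))        # 0.9 < 0.95, so don't automate yet
```

The point of the high threshold is exactly the 70/30 worry above: a model can look impressive on aggregate agreement while still missing the rare, complex cases where judgement matters most.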