In any sizable organization, you can find a lot of roles, and a lot of people filling them, often several on the same day. Why do we use so many fine-grained roles? Why didn't we stick with the coarser-grained and more stable occupations? Because the world got more complicated, everybody got more specialized, and roles help with that. Division of labor means breaking down work previously done by one person into smaller parts that are done repeatedly in the same way and can be assigned to actors: "You are now the widget-maker." This works best when the tasks are easy to learn, because then it is easy to find someone to do them. But humans are not plug-compatible, and actual requirements vary, so some training is always required; that training can be amortized over repeatedly performing the task, which is what a role is. So roles make sense structurally. But why do people actually do what is expected of them instead of just following their own agenda? This is an alignment problem, in this case between the organization and the agent, and we might learn something about the AI alignment problem from it.