Thanks for the comment, David! It also caused me to go back and read this post again, which sparked quite a few old flames in the brain.
I agree that a collection of different approaches to ensuring AI alignment would be interesting! This is something that I’m hoping (now planning!) to capture in part with my exploration of scenario modeling that’s coming down the pipe. Still, a brief overview of the different analytical approaches to AI alignment would be helpful on its own (if it doesn’t already exist in an updated form that I’m unaware of).
I agree with your insight that Weber’s description here can be generalized to moral and judicial systems for society. I suspect if we went looking into Weber’s writing we might find similar analogies here as well.
I agree with your comment on the limitations of hierarchy for human bureaucracies. Organizations with fixed competencies and hierarchical structures benefit from bottom-up information flows and agile adaptation. However, I think this reinforces my point about machine beamte and AGI controlled through this method: precisely because agility and easy modification benefit human organizations, we might want to restrict those capacities in machine agents, deliberately sacrificing the benefits of adaptation in favor of aligned interests and controllability.
Thanks for the feedback! I can imagine some more posts in this direction in the future.