Another point—“What I am introducing into the dialogue is a theoretical and conceptually stable unit of value.”
I argued that without a full and perfect system model, creating perfectly aligned metrics, like a unit of value, is impossible. (To be fair, I really made that argument in the follow-up piece: www.ribbonfarm.com/2016/09/29/soft-bias-of-underspecified-goals/ ) So if our model for human values is simplified in any way, it's impossible to guarantee convergence to the same goal without a full and perfect system model to test it against.
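As a rough illustration of the point (my own toy sketch, not anything from the original posts): suppose the "true" value function penalizes a cost dimension that a simplified proxy metric leaves out. An optimizer climbing the proxy can score arbitrarily well on it while the true value collapses, and without the full model you'd never see the divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" value: rewards output but quadratically penalizes a
# hidden cost that the simplified model omits.
def true_value(x):
    output, hidden_cost = x
    return output - 2.0 * hidden_cost**2

# Simplified proxy metric: a linear stand-in that treats both dimensions
# as equally valuable, so raising hidden_cost looks like progress.
def proxy_value(x):
    output, hidden_cost = x
    return output + hidden_cost

# Hill-climb the proxy and track what happens to the true value.
x = np.zeros(2)
for _ in range(2000):
    candidate = x + rng.normal(scale=0.05, size=2)
    if proxy_value(candidate) > proxy_value(x):
        x = candidate

print(f"proxy score: {proxy_value(x):.1f}")   # keeps climbing
print(f"true score:  {true_value(x):.1f}")    # driven sharply negative
```

The specific functions are made up, of course; the point is only that a metric built from a simplified value model gives you no guarantee about the goal you actually care about.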