That is not a coherent criticism of utilitarianism. Do you understand what it is that you are criticising?
Yes, I do… and it’s not utilitarianism. ;-)
What I’m criticizing is System 2’s built-in model for comprehending motivation: its function is predicting the actions of others, but it usually fails when applied to the self, because it doesn’t model all of the relevant System 1 features.
If you try to build a human-values-friendly AI, or decide what would benefit a person (or people), and you base it on System 2’s model, you will make mistakes, because System 2’s map of System 1 is flawed in the same way that Newtonian physics is flawed for predicting near-light-speed mechanics: it leaves out important terms.