Okay, I think I’ve captured the gist of your position now. Correct me if anything below misrepresents it.

Humans are descriptively not utility maximizers; they can be modeled that way only as a coarse approximation, and with a fair number of exceptions. There seems to be no reason to model them normatively against some ideal utility maximizer either, that is, to apply concepts like “should” in the more rigorous sense of decision theory.

Humans do what they do, not what they “should” do according to some rigorous external model. The argument and intuition here are similar to those for not deferring to philosopher-constructed rules of morality, to non-intuitive conclusions reached from thought experiments, or to God-declared moral rules: you first have to accept each moral rule yourself, according to your own criteria, which might even be circular.