I’m not sure what you see in the distinction between simple and complex preferences. No matter how simple an imperfect agent is, you still face the problem of going from its imperfect decision-making to an ideal preference order.
I don’t mean simple or complicated preferences. I mean a simple mind (perhaps “simple” was a bad choice of terminology). My “simple mind” is a mind that perfectly knows its utility function (and has a well-defined utility function to begin with). It’s just an abstraction to better understand where shouldness comes from.
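For concreteness, here is a minimal sketch of that abstraction in Python; the names (SimpleMind, utility, act) are illustrative assumptions, not anything from the discussion above. The point it tries to capture is that once the utility function is fully known and well-defined, “what the mind should do” collapses into a straightforward argmax.

```python
from typing import Callable, Iterable

class SimpleMind:
    """Sketch of the 'simple mind' abstraction: an agent whose
    utility function is well-defined and perfectly known to itself."""

    def __init__(self, utility: Callable[[str], float]):
        # The agent's complete utility function over outcomes.
        self.utility = utility

    def act(self, options: Iterable[str]) -> str:
        # With perfect self-knowledge of its utility function,
        # "should" reduces to picking the utility-maximizing option.
        return max(options, key=self.utility)

# Hypothetical usage: a mind that strictly prefers tea to coffee.
mind = SimpleMind(utility=lambda outcome: {"tea": 2.0, "coffee": 1.0}[outcome])
assert mind.act(["tea", "coffee"]) == "tea"
```

For such a mind there is no gap between its decision-making and its ideal preference order; the interesting question is what happens once that assumption is dropped.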