The utility function is a mathematical function. It simply evaluates whatever hypothetical universe-history you feed it.
The question of where the agent gets its expected future-universe-history from is more interesting, though, and it’s something you’re right to be sceptical about. Here we’re into bounded rationality and all sorts of wonderful things beyond the scope of the original post (also, for the purposes of discussion, I’m pretending to be something more closely resembling an expected-utility-maximizer than what I actually am).
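To make that separation concrete, here’s a minimal Python sketch (all the names and the toy world-model below are mine, purely illustrative, not anything from the original post). The utility function is a pure evaluation of one complete hypothetical history; the expectation over *predicted* futures, which is where bounded rationality sneaks in, is a separate step layered on top:

```python
def utility(history: tuple) -> float:
    """Pure evaluation of one hypothetical universe-history.
    No side effects, no knowledge of which history is 'real'."""
    return sum(1.0 for event in history if event == "good")

def expected_utility(action, world_model) -> float:
    """Average utility over the futures the agent *predicts*.
    world_model is only a guess about the world, which is where
    bounded rationality enters the picture."""
    return sum(p * utility(h) for h, p in world_model(action))

def choose(actions, world_model):
    """An idealized expected-utility maximizer picks the action with
    the best predicted average; a real agent only approximates this."""
    return max(actions, key=lambda a: expected_utility(a, world_model))

# Toy world-model: each action either yields that many "good" events
# or nothing, with equal probability.
toy_model = lambda a: [(("good",) * a, 0.5), ((), 0.5)]
print(choose([0, 1, 2], toy_model))  # -> 2
```

Note that `utility` never changes no matter how bad the agent’s predictions are; all the scepticism properly attaches to `world_model`.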