Re: Does that mean that I should mechanically overwrite my beliefs about the chance of a lottery ticket winning, in order to maximize my expectation of the payout?
No, it doesn’t. It means that the process going on in the brains of intelligent agents can be well modelled as calculating expected utilities—and then selecting the action that corresponds to the largest one.
Intelligent agents are better modelled as Expected Utility Maximisers than Utility Maximisers. Whether they actually maximise utility depends on whether they are in an environment where their expectations pan out.
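To make the distinction concrete, here is a minimal Python sketch of that decision rule. The lottery odds, ticket price, and jackpot below are made-up illustrative numbers, not real figures: the agent selects the action with the highest expected utility, which here means not buying the ticket.

```python
# Minimal sketch of an expected-utility maximiser.
# All probabilities and payoffs are illustrative assumptions.

def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

actions = {
    # (probability, utility) pairs; ticket costs 1, jackpot pays 10,000,000
    "buy_ticket": [(1e-8, 10_000_000 - 1), (1 - 1e-8, -1)],
    "keep_money": [(1.0, 0)],
}

# The agent maximises its *expectation*; whether it actually maximises
# utility depends on its probability estimates matching the environment.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # keep_money -- the ticket's expected utility is about -0.9
```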
By definition, intelligent agents want to maximize total utility. In the absence of perfect knowledge, they act on expected utility calculations. Is this not a meaningful distinction?