Not all decision-making algorithms work by preferring outcomes, and not all decision-making algorithms that work by preferring outcomes have preferences that form a total preorder over outcomes, which is what would be required to losslessly translate those preferences into a utility function. Many reasonable kinds of decision-making algorithms (for example, ones that have ceteris paribus preferences) do not meet that requirement, including the sorts we see in real world agents. I see no reason to restrict ourselves to the subset that do.
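To make the totality requirement concrete, here is a minimal Python sketch (a toy example of my own, with hypothetical (apples, noise) outcomes, not anything from the thread): an agent with only ceteris paribus preferences leaves some pairs of outcomes incomparable, so its preference relation is not a total preorder and no single real-valued utility function can represent it losslessly.

```python
from itertools import combinations

# Outcomes are (apples, noise) pairs. The preferences are only ceteris paribus:
# more apples is better holding noise fixed, and less noise is better holding
# apples fixed. Outcomes that differ in both coordinates are incomparable.
def prefers(a, b):
    apples_a, noise_a = a
    apples_b, noise_b = b
    if noise_a == noise_b:
        return apples_a >= apples_b
    if apples_a == apples_b:
        return noise_a <= noise_b
    return False  # differs in both dimensions: no preference either way

outcomes = [(1, 0), (2, 0), (1, 1), (2, 1)]

# A total preorder needs prefers(a, b) or prefers(b, a) for every pair.
incomparable = [(a, b) for a, b in combinations(outcomes, 2)
                if not prefers(a, b) and not prefers(b, a)]
print(incomparable)  # [((1, 0), (2, 1)), ((2, 0), (1, 1))] -> totality fails

# Any real-valued u would force u(a) >= u(b) or u(b) >= u(a) for every pair,
# so no utility function can reproduce this relation without adding rankings
# the agent does not actually have.
```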
So the phenomenological meaning is what you centrally mean?
I do not advocate for any of the 3 meanings, but I want to figure out what you are against.
To me, a utility function is a description of the impact of the agent’s existence, and even saying that it refers to an algorithm is a misuse of the concept.
To be honest I’m not sure what you mean. I don’t think so?
An agent makes decisions by some procedure. For some agents, the decisions that procedure produces can be viewed as choosing the more preferred outcome (i.e. when given a choice between A and B, if its decision procedure deterministically chooses A, we’d describe that as “preferring A over B”). For some of those agents, the decisions they make have some additional properties, like that they always either consistently choose A over B or are consistently indifferent between them. When you have an agent like that and combine it with probabilistic reasoning, you get an agent whose decision-making can be compressed into a single utility function.
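As a rough illustration of that compression step (a toy sketch of my own, under the assumption that the agent’s pairwise choices are complete and transitive; the choose function and the outcome names are invented), one can read a utility function straight off the decision procedure by sorting outcomes with it:

```python
from functools import cmp_to_key

def choose(a, b):
    """Stand-in for the agent's decision procedure over sure outcomes."""
    order = ["mud", "bread", "cake"]   # hypothetical fixed taste
    return a if order.index(a) >= order.index(b) else b

def compare(a, b):
    # Positive means a is chosen over b, so a should rank higher.
    return 1 if choose(a, b) == a else -1

outcomes = ["cake", "mud", "bread"]
ranked = sorted(outcomes, key=cmp_to_key(compare))
utility = {o: i for i, o in enumerate(ranked)}   # rank order doubles as utility
print(utility)  # {'mud': 0, 'bread': 1, 'cake': 2}

# Once probabilities enter, "pick the lottery with the higher average of these
# numbers" reproduces the same pairwise choices over sure outcomes, which is the
# sense in which the decision-making gets compressed into one function.
```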
Even non-choosers can already be made into a utility function.
That notion of chooser is sensible. I think it is important to differentiate between “giving a choice” and “forming a choice”, i.e. whether it is the environment or the agent doing it. Seating a rock-bot in front of a chess board can be “giving a choice” without “forming a choice” ever happening (rock-bot is not a chooser). Similarly, while the environment “gives a choice to pull the arm away”, spook-bot never “forms a choice” (because it is literally unimaginable for it to do otherwise) and is not a chooser.
Even spook-bot is external-situation consistent, and that doesn’t require being a chooser. Only a chooser can ever be internal-situation consistent (and even then it should be relativised to particular details of the internal state, i.e. “Seems I can choose between A and B” and “Seems I can choose between A and B. Oh, there is a puppy in the window.” are in the same bucket), but that is hard to approach, as the agent is free to build representations however it wants.
So sure, if you have an agent that is internal-situation-consistent along some of its internal-situation details, and you know which details those are, then you can specify which bits of the agent’s internal state you can forget without impacting your ability to predict its external actions.
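A small sketch of what that forgetting could look like (purely illustrative; the InternalState fields and the policy are my own assumptions, not anything specified in the thread): if the policy is consistent along the “seems I can choose between A and B” detail, an abstraction that drops the puppy-in-the-window detail predicts the external action just as well.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InternalState:
    seems_choosable: tuple   # e.g. ("A", "B") -- the decision-relevant detail
    distraction: str         # e.g. "puppy in the window" -- assumed irrelevant

def act(state: InternalState) -> str:
    """Hypothetical internal-situation-consistent policy: depends only on seems_choosable."""
    return min(state.seems_choosable)   # always picks the alphabetically first option

def forget(state: InternalState) -> tuple:
    """Abstraction: keep only the detail the policy is consistent along."""
    return state.seems_choosable

states = [InternalState(("A", "B"), "nothing"),
          InternalState(("A", "B"), "puppy in the window")]

# Same abstracted state -> same predicted action, so the distraction bit is safe to forget.
assert len({(forget(s), act(s)) for s in states}) == 1
print("the distraction detail is predictively irrelevant")
```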
Going over this revealed a stepping stone I had been falling for. “Expected utility” involves mental representations, while “utility expectation” is about statistics of which there might not be any awareness. An agent that makes the choice with the highest utility expectation is statistically as suffering-free as possible. An agent that makes the choice with the highest expected utility is statistically minimally (subjectively) regretful.
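A small numeric sketch of that distinction (all probabilities and payoffs here are hypothetical): “utility expectation” averages the same payoffs over the true outcome statistics, while “expected utility” averages them over the agent’s own, possibly mistaken, beliefs, and the two can recommend different choices.

```python
true_probs   = {"rain": 0.7, "sun": 0.3}   # the actual statistics
belief_probs = {"rain": 0.2, "sun": 0.8}   # the agent's mental representation
utility = {("umbrella", "rain"): 1, ("umbrella", "sun"): 0,
           ("no_umbrella", "rain"): -5, ("no_umbrella", "sun"): 2}

def average(action, probs):
    """Probability-weighted average of the action's utility over outcomes."""
    return sum(probs[w] * utility[(action, w)] for w in probs)

for action in ("umbrella", "no_umbrella"):
    print(action,
          "| utility expectation:", average(action, true_probs),
          "| expected utility:", average(action, belief_probs))

# Maximizing utility expectation picks "umbrella" (0.7 vs -2.9 under the true
# statistics); maximizing expected utility picks "no_umbrella" (0.6 vs 0.2 under
# the agent's beliefs) -- the subjectively least regretful choice, even though it
# fares worse in fact.
```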