Already, non-choosers can be fitted to a utility function.
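A minimal sketch of that triviality, in Python (the rock-bot policy and the situation strings are made up for illustration): score whatever the bot actually does above every alternative, and its fixed behaviour “maximizes” the resulting utility function even though no choice is ever formed.

```python
# Sketch: rationalizing a non-chooser as a utility maximizer.
# `rock_bot_policy` is a hypothetical fixed mapping from situations to actions;
# the constructed utility function is trivial, not unique.

def rock_bot_policy(situation: str) -> str:
    # A non-chooser: always does the same thing regardless of the board.
    return "sit there"

def induced_utility(situation: str, action: str) -> float:
    # Assign 1 to whatever the bot actually does, 0 to everything else.
    return 1.0 if action == rock_bot_policy(situation) else 0.0

# The bot's behaviour now "maximizes" this utility function in every situation.
actions = ["sit there", "play e4", "resign"]
best = max(actions, key=lambda a: induced_utility("chess board in front", a))
assert best == rock_bot_policy("chess board in front")
```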
That notion of chooser is sensible. I think it is important to differentiate between “giving a choice” and “forming a choice”, i.e. whether it is the environment or the agent doing it. Seating a rock-bot in front of a chess board can be “giving a choice” without “forming a choice” ever happening (rock-bot is not a chooser). Similarly, while the environment “gives a choice to pull the arm away”, spook-bot never “forms a choice” (because it is literally unimaginable for it to do otherwise) and so is not a chooser.
Even spook-bot is external-situation-consistent, and that doesn’t require being a chooser. Only a chooser can ever be internal-situation-consistent (and even then consistency should be relativised to particular details of the internal state, i.e. “Seems I can choose between A and B” and “Seems I can choose between A and B. Oh, there is a puppy in the window.” land in the same bucket), but that is hard to approach since the agent is free to build its representations however it wants.
So sure, if you have an agent that is internal-situation-consistent along some of its internal-situation details, and you know which details those are, then you can specify which bits of the agent’s internal state you can forget without impacting your ability to predict its external actions.
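A sketch of what that forgetting could look like, assuming we already know which internal-state details the choice depends on (the field names below, like puppy_in_window, are purely illustrative): the predictor only ever looks at the projection, so the forgotten bits cost it nothing.

```python
# Sketch: if an agent is internal-situation-consistent along certain details,
# projecting its internal state down to those details loses no predictive power.
# The fields "options" and "puppy_in_window" are illustrative assumptions.

from typing import NamedTuple

class InternalState(NamedTuple):
    options: tuple          # the details the choice actually depends on
    puppy_in_window: bool   # a detail we are allowed to forget

def relevant_part(state: InternalState) -> tuple:
    # Forget everything except the choice-relevant details.
    return state.options

def predict_action(state: InternalState) -> str:
    # The predictor only ever sees the projected state.
    return min(relevant_part(state))  # e.g. picks option "A" over "B"

s1 = InternalState(options=("A", "B"), puppy_in_window=False)
s2 = InternalState(options=("A", "B"), puppy_in_window=True)
# Same bucket, same predicted external action:
assert predict_action(s1) == predict_action(s2) == "A"
```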
Going over this revealed a confusion I had been falling for: “expected utility” involves mental representations, while “utility expectation” is about statistics of which there might not be any awareness. An agent that makes the choice with the highest utility expectation is statistically as suffering-free as possible. An agent that makes the choice with the highest expected utility is statistically minimally (subjectively) regretful.
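A toy sketch of that distinction, with made-up numbers: “utility expectation” averages the utility under the true statistics, while “expected utility” averages it under the agent’s own representation of those statistics, and the two can recommend different actions.

```python
# Sketch: "utility expectation" vs "expected utility" for one choice.
# The true probabilities and the agent's beliefs are made-up numbers.

utilities = {"win": 1.0, "lose": -1.0}

# Statistics of the world (no awareness required):
true_probs = {"pull arm": {"win": 0.3, "lose": 0.7},
              "hold still": {"win": 0.6, "lose": 0.4}}

# The agent's mental representation of those statistics:
believed_probs = {"pull arm": {"win": 0.8, "lose": 0.2},
                  "hold still": {"win": 0.5, "lose": 0.5}}

def mean_utility(probs, action):
    return sum(p * utilities[outcome] for outcome, p in probs[action].items())

# Highest utility expectation: statistically best off, awareness or not.
best_statistically = max(true_probs, key=lambda a: mean_utility(true_probs, a))

# Highest expected utility: minimally regretful by the agent's own lights.
best_subjectively = max(believed_probs, key=lambda a: mean_utility(believed_probs, a))

print(best_statistically)  # "hold still"
print(best_subjectively)   # "pull arm"
```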