In other words, I don’t see any point in modeling decision making that is intelligent yet not omniscient and deterministic unless the utility at a given state includes an expectation over future states.
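Concretely, one standard way to write that requirement (this is just the textbook expected-utility formulation, not anything specific to the wrapper under discussion):

```latex
% From state s, a non-omniscient agent picks the action that maximises
% utility in expectation over its own predictive distribution P of
% successor states s'.
a^*(s) = \arg\max_{a} \; \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\!\left[ U(s') \right]
       = \arg\max_{a} \sum_{s'} P(s' \mid s, a)\, U(s')
```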
So there’s no point in discussing “utility maximisers” rather than “expected utility maximisers”?
I don’t really agree: “utility maximiser” is a simple generalisation of the concept of an “expected utility maximiser”. Since there are many different ways of predicting the future, this seems like a useful abstraction to me.
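To make the generalisation concrete, here is a minimal toy sketch of my own (not the wrapper itself; the `predict` function is a made-up stand-in for whatever prediction process the agent uses): the generic maximiser only needs some score over actions, and the expected-utility version is the special case where that score is an average of a state utility over predicted outcomes.

```python
from typing import Callable, Dict, Iterable, TypeVar

Action = TypeVar("Action")
State = TypeVar("State")


def utility_maximiser(actions: Iterable[Action],
                      action_utility: Callable[[Action], float]) -> Action:
    """Pick the action with the highest utility, however that utility is defined."""
    return max(actions, key=action_utility)


def expected_utility_maximiser(actions: Iterable[Action],
                               predict: Callable[[Action], Dict[State, float]],
                               state_utility: Callable[[State], float]) -> Action:
    """Special case: an action's utility is its expected utility over predicted outcomes."""
    def expected_utility(action: Action) -> float:
        # predict() is a hypothetical prediction model: action -> {state: probability}
        outcome_probs = predict(action)
        return sum(p * state_utility(s) for s, p in outcome_probs.items())
    return utility_maximiser(actions, expected_utility)


# Toy usage: choosing between a safe and a risky bet.
if __name__ == "__main__":
    def predict(action):
        return {"win": 0.1, "lose": 0.9} if action == "risky" else {"win": 0.5, "lose": 0.5}

    def state_utility(state):
        return {"win": 10.0, "lose": 0.0}[state]

    print(expected_utility_maximiser(["safe", "risky"], predict, state_utility))  # -> "safe"
```

The point of factoring it this way is that any prediction process can be slotted in via `predict` without the generic maximiser changing.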
...anyway, if you were wrapping a model of a human, its actions would clearly be based on predictions of future events. If you mean that you want the prediction process to be abstracted out into the wrapper, there is obviously no easy way to do that.
You could claim that a human, while a “utility maximiser”, is not clearly an “expected utility maximiser”; my wrapper doesn’t disprove such a claim. I generally think the “expected utility maximiser” description is appropriate for a human as well, but there is no such neat demonstration of that.