I’m not sure whether AIXI has a “nature”/personality as such, though. I suppose this might be encoded in the initial utility function somehow, but I doubt it’s feasible to include all these kinds of scenarios in advance.
That an agent “recomputes its decisions” is in any case not a valid argument for its being unable to precommit. Precommitment through an inability to perform certain actions is a workaround, not a necessity: a better decision theory simply won’t perform those actions of its own accord.
So: me neither. I was only saying that the inference from “it recomputes its actions at every time-step” to “it lacks precommitment” is invalid.
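To make the point concrete, here is a minimal toy sketch (hypothetical, not AIXI itself; all names and numbers are invented for illustration). Both agents recompute at every time-step; whether that destroys precommitment depends on *what* gets recomputed. In a Parfit’s-hitchhiker-style setup, a greedy agent recomputing the isolated action reneges after rescue, while an agent recomputing the whole policy (including how its policy determines whether it gets rescued at all) still pays:

```python
def greedy_action(rescued: bool, pay_cost: float) -> str:
    """Recomputes the *action* in isolation each step.

    Once already rescued, paying is a pure loss, so it reneges:
    recomputation here does defeat the commitment.
    """
    return "renege" if rescued else "promise"


def policy_action(rescued: bool, pay_cost: float, rescue_value: float) -> str:
    """Recomputes the entire *policy* from scratch each step.

    It evaluates "always pay" vs "always renege", taking into account
    that the rescuer only rescues an agent predicted to pay. The same
    computation yields the same answer before and after rescue, so the
    commitment holds despite per-step recomputation.
    """
    value_pay = rescue_value - pay_cost  # predicted payment -> rescued
    value_renege = 0.0                   # predicted reneging -> not rescued
    return "pay" if value_pay > value_renege else "renege"


# After rescue (time-step 1), both agents recompute their decision:
print(greedy_action(rescued=True, pay_cost=10))                     # reneges
print(policy_action(rescued=True, pay_cost=10, rescue_value=100))   # still pays
```

The difference is not whether the decision is recomputed, but whether the recomputation is over isolated actions or over policies.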