Note that when I said

> (we don’t need any fancy engineering or arbitrary choices to figure out AUs/optimal value from the agent’s perspective)

I meant we could just consider how the agent’s AUs are changing without locating a human in the environment.
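For concreteness, here is one way to cash this out, sketched in the style of the AUP penalty; the auxiliary reward functions $R_i$ and the no-op action $\varnothing$ are modeling assumptions, not anything we have to locate in the environment:

$$\text{Penalty}(s, a) = \sum_{i=1}^{n} \left| Q_{R_i}(s, a) - Q_{R_i}(s, \varnothing) \right|$$

Every term on the right-hand side is computed from the agent's own Q-functions over its own auxiliary goals, so no human needs to be identified anywhere in the environment.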
Cool. We’re probably on the same page then.