No, that’s helpful. If it were the right way, do you think this reasoning would apply?
Edit: alternatively, if a proposal does decompose an agent into world-model/goals/planning (as IRL does), does the argument stand that we should try to analyze the behavior of a Bayesian agent with a large model class which implements the idea?
… Plausibly? Idk, it’s very hard for me to talk about the validity of intuitions in an informal, intuitive model that I don’t share. I don’t see anything obviously wrong with it.
There’s the usual issue that Bayesian reasoning doesn’t properly account for embeddedness, but I don’t think that would make much of a difference here.
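For concreteness about the decomposition being discussed (world-model / goals / planning, with a Bayesian agent holding a posterior over a model class), here is a minimal illustrative sketch. Everything in it — the class names, the toy discrete model class, the one-step planner — is an assumption made up for illustration, not anything proposed in the exchange above.

```python
# Minimal sketch (illustrative only): a Bayesian agent factored into
# world-model / goals / planning. The tiny hypothesis class stands in
# for the "large model class" mentioned above; all names are assumptions.
from dataclasses import dataclass
from typing import Callable, List

State = int
Action = str

@dataclass
class WorldModel:
    """One hypothesis: a deterministic transition function over a discrete state space."""
    transition: Callable[[State, Action], State]

@dataclass
class BayesianAgent:
    hypotheses: List[WorldModel]          # the model class (tiny here, for the sketch)
    posterior: List[float]                # belief over hypotheses
    reward: Callable[[State], float]      # goals, kept separate from the world model
    actions: List[Action]

    def update(self, s: State, a: Action, s_next: State) -> None:
        """Bayes update: zero out hypotheses inconsistent with the observed transition."""
        likelihoods = [1.0 if h.transition(s, a) == s_next else 0.0
                       for h in self.hypotheses]
        unnorm = [p * l for p, l in zip(self.posterior, likelihoods)]
        total = sum(unnorm) or 1.0
        self.posterior = [p / total for p in unnorm]

    def plan(self, s: State) -> Action:
        """One-step planning: pick the action maximizing expected reward under the posterior."""
        def expected_value(a: Action) -> float:
            return sum(p * self.reward(h.transition(s, a))
                       for p, h in zip(self.posterior, self.hypotheses))
        return max(self.actions, key=expected_value)
```

Note that the sketch is dualistic: the agent's posterior is over external environments and never models the agent itself, which is exactly the embeddedness caveat raised above.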