The former (that is, model-based RL → agent). The latter (smart agent → model-based RL), I think, would be founded on a bit of a level error. At bottom, there are only atoms and the void. Whether something is “really” an agent is a question of how well we can describe this collection of atoms in terms of an agent-shaped model. This is different from the question of what abstractions humans used in the process of programming the AI; like Rohin says, parts of the agent might be thought of as implicit in the programming, rather than explicit.
Sorry, I don’t know if I can direct you to any explicit sources. If you check out papers like Concrete Problems in AI Safety or others in that genre, though, you’ll see model-based RL used as a simplifying set of assumptions that imply agency.
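For concreteness, here's a toy sketch (my own illustration, not taken from those papers; all names are made up) of why a model-based RL setup is “agent-shaped”: it has an explicit world model, an explicit objective, and a planner that picks actions by maximizing predicted return over that model.

```python
# Toy model-based RL agent: explicit world model + objective + planner.
# Purely illustrative; the point is that the agent abstraction is baked
# into the architecture, not discovered after the fact.

import random


class WorldModel:
    """Deterministic toy model: state is an integer, actions shift it."""

    def predict(self, state, action):
        next_state = state + action
        reward = -abs(next_state)  # explicit objective: drive state toward 0
        return next_state, reward


def plan(model, state, actions=(-1, 0, 1), depth=3):
    """Exhaustive lookahead: choose the first action of the sequence with
    the highest predicted return. This argmax over model-predicted outcomes
    is the 'agent-shaped' part."""

    def best_return(s, d):
        if d == 0:
            return 0.0
        return max(r + best_return(s2, d - 1)
                   for a in actions
                   for s2, r in [model.predict(s, a)])

    def q(a):
        s2, r = model.predict(state, a)
        return r + best_return(s2, depth - 1)

    return max(actions, key=q)


if __name__ == "__main__":
    model, state = WorldModel(), random.randint(-5, 5)
    for _ in range(6):
        action = plan(model, state)
        state, reward = model.predict(state, action)
        print(f"action={action:+d}  state={state:+d}  reward={reward:+.1f}")
```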