Thanks for writing up this post! It’s really similar in spirit to some research I’ve been working on with others, which you can find on the ArXiv here: https://arxiv.org/abs/2006.07532 We also model bounded goal-directed agents by assuming that the agent is running some algorithm given bounded compute, but our approach differs in the following ways:
We don’t attempt to compute full policies over the state space, since this is generally intractable, and also cognitively implausible, at least for agents like ourselves. Instead, we compute (partial) plans from initial states to goal states.
Rather than using RL algorithms like value iteration or SARSA, we assume that agents deploy some form of heuristic-guided model-based search (e.g., A* or MCTS) with a bounded computational budget. If search terminates before the goal is reached, then agents pursue a partial plan towards a promising intermediate state found during search.
“Promisingness” here depends on the search heuristic used: a poor search heuristic will lead to highly non-optimal partial plans, whereas a good search heuristic will lead to partial plans that make significant progress toward the goal, even if the goal itself isn’t reached.
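To make the partial-plan idea a bit more concrete, here is a minimal sketch in Python of a budget-limited best-first search that falls back to the most promising node found so far. This is only an illustration (our actual models are specified as probabilistic programs), and all names and signatures below are made up for the example:

```python
import heapq
import itertools

def bounded_best_first_search(start, goal_test, successors, heuristic, budget):
    """A*-style search that expands at most `budget` nodes.

    Returns (plan, reached_goal). If the budget runs out first, the plan is a
    partial one leading to the node the heuristic judged most promising.
    `successors(state)` yields (action, next_state, cost) triples and
    `heuristic(state)` estimates remaining cost-to-goal; both are illustrative.
    """
    tie = itertools.count()                       # tie-breaker so states are never compared
    frontier = [(heuristic(start), next(tie), 0.0, start, [])]
    best_h, best_plan = heuristic(start), []      # most promising node seen so far
    visited, expanded = set(), 0

    while frontier and expanded < budget:
        _, _, g, state, plan = heapq.heappop(frontier)
        if goal_test(state):
            return plan, True                     # complete plan to the goal
        if state in visited:
            continue
        visited.add(state)
        expanded += 1
        for action, nxt, cost in successors(state):
            h = heuristic(nxt)
            if h < best_h:                        # heuristic thinks this node is closer to the goal
                best_h, best_plan = h, plan + [action]
            heapq.heappush(frontier, (g + cost + h, next(tie), g + cost, nxt, plan + [action]))

    return best_plan, False                       # budget exhausted: pursue the partial plan
```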
Separating out the search heuristic from the search budget gives us at least two different notions of agent-boundedness, roughly corresponding to competence vs. effort. An agent may be really good at search but not spend a large computational budget on it, or it may be bad at search but spend a lot of time searching and still get the right answer.
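As a toy illustration of those two knobs, assuming the hypothetical sketch above: on a small chain domain, a good heuristic makes progress toward the goal even with a tiny budget, while an uninformed heuristic needs a much larger budget to get there at all:

```python
# Toy 1-D chain: states 0..10, goal at 10, moves of +/-1 with unit cost.
GOAL = 10

def successors(s):
    for action, nxt in (("+1", s + 1), ("-1", s - 1)):
        if 0 <= nxt <= 10:
            yield action, nxt, 1.0

def good_heuristic(s):   # informed: distance to the goal (a "competent" searcher)
    return abs(GOAL - s)

def poor_heuristic(s):   # uninformed: every state looks equally promising
    return 0.0

# Competence knob = heuristic quality; effort knob = node budget.
for name, h in (("good", good_heuristic), ("poor", poor_heuristic)):
    for budget in (3, 50):
        plan, done = bounded_best_first_search(0, lambda s: s == GOAL, successors, h, budget)
        print(f"{name} heuristic, budget {budget}: reached goal = {done}, plan length = {len(plan)}")
```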
The abstract for the paper is below—hope it’s useful to read, and I’d be curious to hear your thoughts:
Online Bayesian Goal Inference for Boundedly-Rational Planning Agents
People routinely infer the goals of others by observing their actions over time. Remarkably, we can do so even when those actions lead to failure, enabling us to assist others when we detect that they might not achieve their goals. How might we endow machines with similar capabilities? Here we present an architecture capable of inferring an agent’s goals online from both optimal and non-optimal sequences of actions. Our architecture models agents as boundedly-rational planners that interleave search with execution by replanning, thereby accounting for sub-optimal behavior. These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent’s goals and internal planning processes. To perform such inference, we develop Sequential Inverse Plan Search (SIPS), a sequential Monte Carlo algorithm that exploits the online replanning assumption of these models, limiting computation by incrementally extending inferred plans as new actions are observed. We present experiments showing that this modeling and inference architecture outperforms Bayesian inverse reinforcement learning baselines, accurately inferring goals from both optimal and non-optimal trajectories involving failure and back-tracking, while generalizing across domains with compositional structure and sparse rewards.
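If a more procedural picture of SIPS helps: very roughly, it maintains a set of weighted particles, each carrying a hypothesized goal and a partial plan, and as each new action is observed it incrementally extends those plans and reweights the particles by how well they predict the action. The sketch below is heavily simplified and uses stand-in callables rather than the actual interface from the paper, which is implemented with probabilistic programs:

```python
import random

def sips_sketch(goals, observed_actions, extend_plan, action_likelihood, n_particles=100):
    """Very rough particle-filter sketch of online goal inference.

    Each particle carries a hypothesized goal and a partial plan; as each
    observed action arrives, the plan is incrementally extended (replanning)
    and the particle is reweighted by how well it predicts that action.
    `extend_plan` and `action_likelihood` are stand-ins, not the paper's API.
    """
    particles = [{"goal": random.choice(goals), "plan": [], "w": 1.0}
                 for _ in range(n_particles)]
    for a_obs in observed_actions:
        for p in particles:
            p["plan"] = extend_plan(p["goal"], p["plan"])         # bounded replanning step
            p["w"] *= action_likelihood(a_obs, p["goal"], p["plan"])
        total = sum(p["w"] for p in particles) or 1.0
        for p in particles:
            p["w"] /= total                                       # normalize (resampling omitted)
    posterior = {}                                                # posterior over goals
    for p in particles:
        posterior[p["goal"]] = posterior.get(p["goal"], 0.0) + p["w"]
    return posterior
```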
Sorry for the delay in answering.
Your paper looks great! It seems to tackle in a clean and formal way what I was vaguely pointing at. We’re currently reading a lot of papers and blog posts to prepare for an in-depth literature review about goal-directedness, and I added your paper to the list. I’ll try to come back here and comment after I read it.