One idea about the source of the potential discrepancy: for the tasks where it did figure out how to use tools, did any of the prompts explicitly tell it to “use the objects to reach a higher floor,” or something similar? I’m wondering whether the cases where it did use tools are ones where doing so was instrumentally useful for achieving a prompted objective that didn’t explicitly require tool use.
None of the prompts tell it what to do; they aren’t even in English. (Or so I think? Correct me if I’m wrong!) Instead they are in propositional logic, using atoms that refer to objects, colors, relations, and players. They just give the reward function in disjunctive normal form (i.e. a disjunction of conjunctions of atoms) and present it to the agent to observe.
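To make that concrete, here is a minimal sketch of what such a goal representation might look like. All names (relations, object identifiers, the `reward` helper) are hypothetical illustrations, not the actual encoding used in the paper; the point is just that a goal is a disjunction of conjunctions of atomic predicates, and reward fires if any one conjunction is fully satisfied.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """An atomic predicate over objects, colors, relations, and players."""
    relation: str  # e.g. "near", "on", "hold" (hypothetical names)
    subject: str   # e.g. "player_0", "purple_sphere"
    target: str    # e.g. "yellow_cube", "yellow_floor"

# A goal in disjunctive normal form: a list of options,
# each option being a conjunction (list) of atoms.
Goal = list[list[Atom]]

example_goal: Goal = [
    # option 1: hold the purple sphere AND be near the yellow cube
    [Atom("hold", "player_0", "purple_sphere"),
     Atom("near", "player_0", "yellow_cube")],
    # option 2: the purple sphere is on the yellow floor
    [Atom("on", "purple_sphere", "yellow_floor")],
]

def reward(goal: Goal, true_atoms: set[Atom]) -> float:
    """Return 1 if every atom of at least one option holds in the state."""
    for option in goal:
        if all(atom in true_atoms for atom in option):
            return 1.0
    return 0.0
```

So the agent never sees an instruction like “use the objects”; it only observes which combinations of atomic facts would yield reward, and any tool use has to emerge as an instrumental strategy for making one of those conjunctions true.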