I haven’t looked into this in detail but I would be quite surprised if Voyager didn’t do any of that?
Although I’m not sure this is exactly what you’re looking for. It seems straightforward that if you train/fine-tune a model on examples of people playing a game that involves leveraging [very helpful but not strictly necessary] resources, you are going to get an AI capable of doing the same.
It would be less trivial if an RL agent did that, especially if it didn’t stumble into the strategy/association “I need to do X, so let me get Y first” by accident, but instead figured out that Y tends to be helpful for X via some chain of associations.