After further review, this is probably beyond capabilities for the moment.
Also, the most important part of this kind of experiment is predicting in advance what reward schedules will produce what values within the agent, such that we can zero-shot transfer that knowledge to other task types (e.g. XLand instead of Minecraft). Then we could say “I want an agent which goes to high-elevation platforms reliably across situations, with low labelling cost”, sketch out a reward schedule, and have the first capable agents trained using that schedule generalize in the way we want.
Why is this difficult? Is it only difficult to do this in Challenge Mode—if you could just code in “Number of chickens” as a direct feed to the agent, could it be done then? I was thinking about this today, and got to wondering why it was hard—at what step does an experiment to do this fail?
Even if you can code in number of chickens as an input to the reward function, that doesn’t mean you can reliably get the agent to generalize to protect chickens. That input probably makes the task easier than in Challenge Mode, but not necessarily easy. The agent could generalize to some other correlate, like ensuring there are no skeletons nearby (because they might shoot nearby chickens), but not in order to protect the chickens.
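For concreteness, here is a minimal sketch of what “coding in number of chickens as an input to the reward function” might look like, assuming a hypothetical environment that exposes a chicken count in its step info. The names and API here are illustrative, not from any real Minecraft RL library.

```python
def shaped_reward(base_reward: float,
                  prev_chicken_count: int,
                  chicken_count: int,
                  penalty_per_chicken: float = 1.0) -> float:
    """Penalize the agent whenever the number of living chickens drops.

    This is the kind of reward schedule being discussed: the chicken count is
    fed directly into the reward, but nothing here guarantees the agent
    internalizes "protect chickens" rather than some correlate of it.
    """
    chickens_lost = max(0, prev_chicken_count - chicken_count)
    return base_reward - penalty_per_chicken * chickens_lost


# Hypothetical rollout loop (env and info fields are assumptions):
#   prev = info["chicken_count"]
#   obs, base_r, done, info = env.step(action)
#   reward = shaped_reward(base_r, prev, info["chicken_count"])
```

Even with this shaping, the agent might satisfy the penalty by, say, keeping skeletons away, without ever representing chickens as something to protect.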
So, if I understand correctly, the way we would consider it likely that the correct generalisation had happened would be if the agent could generalise to hazards it had never seen actually kill chickens before? And this would require the agent to have an actual model of how chickens can be threatened, such that it could predict that lava would destroy chickens based on, say, its knowledge that it will die if it jumps into lava, which is beyond capabilities at the moment?
Yes, that would be the desired generalization in the situations we checked. If that happens, it would mean we had specified a behavioral generalization property, written down how we were going to get it, and been right in predicting that that training rationale would go through.