It’s not quite as interesting as I initially thought, since they allow handcrafted reward functions and heuristics. It would be more interesting if the designers did not know the particular task in advance, and the AI would be forced to learn the task entirely from demonstrations and/or natural language description.
Going from zero to “produce an AI that learns the task entirely from demonstrations and/or natural language description” is really hard for the modern AI research hive mind. You instead have to give it a shaped reward: easier breadcrumbs along the way (such as allowing handcrafted heuristics, and allowing knowledge of a particular target task) to get the hive mind started making progress.
A research AI that will train agents based on its understanding of described benchmarks does sound interesting, although how it would get hold of things like ‘human feedback’, and craft setups for that, isn’t clear.*
Trying to create a setup so AIs can learn from other AIs: crafting rewards seems unlikely; expert demonstrations might be doable, though whether they would be more or less useful than a human demonstration, I’m not sure. There might also be the option to ask ‘what would _ do in this situation’ if you can actually run _.
Edited to add:
*First I was imagining Skynet. Now I’m imagining a really weird license agreement: you may use this AI in exchange for [$ + billing schedule], but with the data it’s trained on (even if seemingly totally unrelated), you must also have it try working on this AI benchmark (frequency based on usage) and publicly share the score (but not necessarily pictures of outputs: there’s the risk of user data being leaked by being recreated in Minecraft, and the user retains full responsibility for the security of such data, and is encouraged but not required to run it ‘offline’, i.e. not on Minecraft servers).
they allow handcrafted reward functions and heuristics
We allow it, but we don’t think it will lead to good performance (unless you throw a very large amount of time at it).
The AI safety community claims it is hard to specify reward functions. If we actually believe this claim, we should be able to create tasks where even if we allow people to specify reward functions, they won’t be able to do so. That’s what we’ve tried to do here.
Note we do ban extraction of information from the Minecraft simulator—you have to work with pixels, so if you want to make handcrafted reward functions, you have to compute rewards from pixels somehow. (Technically you also have inventory information but that’s not that useful.) We have this rule because in a real-world deployment you wouldn’t be able to simply extract the “state” of physical reality.
I am a bit more worried about allowing heuristics: it’s plausible to me that our chosen tasks are simple enough that heuristics could solve them, even though real-world tasks are too complex for similar heuristics to work. But this is basically a place where we’re sticking our necks out and saying “nope, heuristics won’t suffice either” (again, unless you put a lot of effort into designing the heuristics, in which case it would have been faster to just build the system that, say, learns from demonstrations).
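(As a concrete illustration of what “compute rewards from pixels” could look like in practice, here is a minimal Python sketch. The function name, the blue-dominance thresholds, and the idea of scoring water-colored pixels in the top half of the frame as a proxy for a visible waterfall are all hypothetical illustrations, not part of the benchmark or its actual interface.)

```python
import numpy as np

def crude_waterfall_reward(frame: np.ndarray) -> float:
    """Toy hand-crafted reward computed purely from an RGB frame.

    Counts roughly water-colored (blue-dominant) pixels in the upper half of
    the image as a crude proxy for "a waterfall is visible". The thresholds
    are invented for illustration and would need heavy tuning in practice.
    """
    assert frame.ndim == 3 and frame.shape[2] == 3  # expect H x W x 3 (RGB)
    upper = frame[: frame.shape[0] // 2].astype(np.float32)
    r, g, b = upper[..., 0], upper[..., 1], upper[..., 2]
    # A pixel counts as "water-like" if blue clearly dominates red and green.
    water_mask = (b > 120) & (b > r + 30) & (b > g + 20)
    # Reward is the fraction of water-like pixels, in [0, 1].
    return float(water_mask.mean())

# Example call on a dummy frame, just to show the signature.
dummy_frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(crude_waterfall_reward(dummy_frame))
```

Even this toy version hints at why hand-crafted rewards are expected to struggle: it cannot tell a waterfall apart from sky, ocean, or rain, and tightening it enough to be reliable would take roughly the kind of effort that could instead go into a system that learns from demonstrations.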
It would be more interesting if the designers did not know the particular task in advance
But for real-world deployment of AI systems, designers do know the task in advance! We don’t want to ban strategies that designers could use in a realistic setting.
The AI safety community claims it is hard to specify reward functions… But for real-world deployment of AI systems, designers do know the task in advance!
Right, but you’re also going for tasks that are relatively simple and easy, in the sense that “MakeWaterfall” is something that I can, based on my own experience, imagine solving without any ML at all (though of course going to that extreme would require massive work). It might be that for such tasks solutions using handcrafted rewards/heuristics would be viable, but wouldn’t scale to more complex tasks. If your task were e.g. “follow arbitrary natural language instructions”, then I wouldn’t care about the “lax” rules.
Note we do ban extraction of information from the Minecraft simulator
This is certainly good, but I wonder what the exact rules are here. Suppose the designer trains a neural network to recognize trees in Minecraft by getting the Minecraft engine to generate lots of images of trees. The resulting network is then used as a hardcoded part of the agent architecture. Is that allowed? If not, how well can you enforce it (I imagine something of the sort can be done in subtler ways)?
Not saying that what you’re doing is not useful, just pointing out a certain way in which the benchmark might diverge from its stated aim.
It might be that for such tasks solutions using handcrafted rewards/heuristics would be viable, but wouldn’t scale to more complex tasks.
I agree that’s possible. To be clear, we did spend some time thinking about how we might use handcrafted rewards / heuristics to solve the tasks, and eliminated a couple of them on that basis, so I think it probably won’t be true here.
Suppose the designer trains a neural network to recognize trees in Minecraft by getting the Minecraft engine to generate lots of images of trees. The resulting network is then used as a hardcoded part of the agent architecture. Is that allowed?
No.
If not, how well can you enforce it (I imagine something of the sort can be done in subtler ways)?
For the competition, there’s a ban on pretrained models that weren’t publicly available prior to competition start. We look at participants’ training code to ensure compliance. It is still possible to violate this rule in a way that we may not catch (e.g. maybe you use internal simulator details to do hyperparameter tuning, and then hardcode the hyperparameters in your training code), but it seems quite challenging and not worth the effort even if you are willing to cheat.
For the benchmark (which is what I’m more excited about in the longer run), we’re relying on researchers to follow the rules. Science already relies on researchers honestly reporting their results—it’s pretty hard to catch cases where you just make up numbers for your experimental results.
(Also in the benchmark version, people are unlikely to write a paper about how they solved the task using special-case heuristics; that would be an embarrassing paper.)
The AI safety community claims it is hard to specify reward functions. If we actually believe this claim, we should be able to create tasks where even if we allow people to specify reward functions, they won’t be able to do so. That’s what we’ve tried to do here.
It being hard to specify reward functions for a specific task and it being hard to specify reward functions for a more general AGI seem to me like two very different problems.
Additionally, developing a safe system and developing an unsafe system are very different. Even if your reward function works 99.9% of the time, it can be exploited in the cases where it fails.
Okay, regardless of what the AI safety community claims, I want to make that claim.
(I think a substantial chunk of the AI safety community also makes that claim but I’m not interested in defending that here.)
It being hard to specify reward functions for a specific task and it being hard to specify reward functions for a more general AGI seem to me like two very different problems.
As an aside, if I thought we could build task-specific AI systems for arbitrary tasks, and only super general AI systems were dangerous, I’d be advocating really hard for sticking with task-specific AI systems and never building super general AI systems (or only building them after some really high threshold of safety was met).
if I thought we could build task-specific AI systems for arbitrary tasks, and only super general AI systems were dangerous, I’d be advocating really hard for sticking with task-specific AI systems and never building super general AI systems
The problem with this is that you need an AI whose task is “protect humanity from unaligned AIs”, which is already very “general” in a way (i.e. it requires operating on large scales of space, time and strategy). That is, unless you can effectively reduce this to many “narrow” tasks, which is probably not impossible but also not easy.
I think it’s very easy to say “don’t build general systems, build task-specific ones”, but hard to make that happen, because general ones might promise a lot of economic returns.
A task like “Handle this Amazon customer query correctly” is already very general, as it includes a host of different long-tail issues about possible bugs that might appear (some of them unknown). If a customer faces an issue on a page that’s likely a bug, a customer service AI benefits from understanding the code that produces the customer’s issue.
Given the way economic pressures work, I see it as very probable that companies will just go ahead and do what’s most efficient for their business goals.
Going from zero to “produce an AI that learns the task entirely from demonstrations and/or natural language description” is really hard for the modern AI research hive mind.
To clarify, by hive mind, do you mean humans?
Yes. I’m being a bit whimsical here; I’m tickled by the analogy to training neural nets.
It’s not “from zero” though; I think that we already have ML techniques that should be applicable here.
Even if your reward function works 99.9% of the time, it can be exploited in the cases where it fails.

It’s not clear that a system which doesn’t use reward doesn’t have the same issue (relative to “99.9% of the time”).