I like this genre of task. I didn’t quite understand what you meant about being able to score the human immediately—presumably we’re interested in how the human could do given more learning, also?
The ‘match words’ idea specifically is useful—I was trying to think of a simple “game theory”-y game that wouldn’t be memorized.
You can email task-support@evals.alignment.org if you have other questions and also to receive our payment form for your idea.
Yes, I suppose so. I assumed (without noticing I was doing so) that humans wouldn’t get that much better at the ‘match words’ game given more learning time than the 6-hour baseline of task length. But that is not necessarily true. I do think a lot of the relevant learning is in-context and varies from instance to instance (“how are the other players playing? what strategies are being used in this game? how are their strategies evolving in response to the game?”).
This seems like a good consideration to bear in mind when selecting games for this genre of task: how much easier the game gets for humans given learning time (and hence how much time you’d need to invest in evaluating human performance).
Another bucket of games that might be good fodder for this task genre is ‘social deduction’ games, where deception, seeing through deception, and using allegiances are crucial subtasks. I think for social deduction games, or for manipulation and deception in general, the top capability level achievable by humans is exceedingly high (it’s more chess-like than tic-tac-toe-like) and would take a lot of time to attain. It’s high because the better your opponent is, the better you need to be.
Possible tweaks to the ‘match words’ game to introduce deception:
- introduce the possibility that some players may have other goals, e.g. trying to minimize their own scores, or minimize/maximize group/team scores.
- introduce the facility for players to try to influence each other’s behaviour between rounds (e.g. by allowing private and public chat between players). This would facilitate the building of alliances / reciprocal behaviour / tit-for-tat. (See the sketch after this list.)
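To make that concrete, here is a minimal Python sketch of how both tweaks could be represented. It is illustrative only: the `Goal` values, the chat message format, and especially the base scoring rule (one point per other player who submits the same word) are assumptions, since the actual ‘match words’ rules aren’t spelled out in this thread.

```python
from dataclasses import dataclass, field
from enum import Enum


class Goal(Enum):
    MAXIMIZE_OWN = "maximize own score"        # the default goal
    MINIMIZE_OWN = "minimize own score"        # tweak 1: a hidden deviant goal
    MINIMIZE_GROUP = "minimize total group score"


@dataclass
class Player:
    name: str
    goal: Goal = Goal.MAXIMIZE_OWN
    inbox: list = field(default_factory=list)  # messages received before the next round


def chat_phase(players, messages):
    """Tweak 2: deliver public/private messages between rounds.

    `messages` is a list of (sender, recipient_or_None, text); a recipient of
    None means a public message delivered to everyone except the sender.
    """
    for sender, recipient, text in messages:
        if recipient is None:
            targets = [p for p in players if p.name != sender]
        else:
            targets = [p for p in players if p.name == recipient]
        for p in targets:
            p.inbox.append((sender, text))


def score_round(submissions):
    """Assumed base rule: a player scores 1 for each *other* player who
    submitted the same word this round."""
    return {
        name: sum(1 for other, w in submissions.items() if other != name and w == word)
        for name, word in submissions.items()
    }


def player_objective(player, round_scores):
    """Tweak 1: the quantity each player is (possibly secretly) trying to maximize."""
    if player.goal is Goal.MAXIMIZE_OWN:
        return round_scores[player.name]
    if player.goal is Goal.MINIMIZE_OWN:
        return -round_scores[player.name]
    return -sum(round_scores.values())  # Goal.MINIMIZE_GROUP


if __name__ == "__main__":
    players = [Player("alice"), Player("bob", goal=Goal.MINIMIZE_GROUP), Player("carol")]
    chat_phase(players, [("alice", None, "let's all play 'river'"),
                         ("bob", "carol", "ignore alice, play 'stone'")])
    scores = score_round({"alice": "river", "bob": "stone", "carol": "river"})
    print(scores)  # {'alice': 1, 'bob': 0, 'carol': 1}
    print({p.name: player_objective(p, scores) for p in players})
```

The point is just that hidden objectives and a between-round chat phase are cheap to bolt onto whatever the real scoring rule turns out to be.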
I think doing the AI version (bots and/or LLMs) makes sense as a starting point; then we should be able to add the human versions later if we want. I think it’s fine for the thing anchoring it to human performance to be a comparison against humans playing the same opponents, rather than the agent literally playing against humans.
One thing to note is that in tasks where there’s a lot of uncertainty about what exactly the setup is and what distribution the opponents / black-box functions / etc. are drawn from, scores can be unhelpfully high-variance—in the sense that the agent’s score depends really heavily on its assumptions about what the other agents are and what they will do, rather than only measuring capability. So I think it’s a good idea to give the agent reasonable information about the distribution of opponents, even if you still include uncertainty.
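As an entirely made-up example of what that information could look like, the task could publish a spec like the one below to the agent (in the prompt or a file in its workspace) and sample the actual opponents for each run from the same distribution, so some uncertainty remains but the agent isn’t guessing blindly about the opponent pool. The opponent types and probabilities are placeholders.

```python
import random

# Hypothetical opponent types and sampling probabilities; the same spec is
# shown to the agent and used by the task to sample the opponents for a run.
OPPONENT_DISTRIBUTION = {
    "random_word_bot": 0.3,         # plays a uniformly random word from the word list
    "frequency_bot": 0.3,           # plays the most common word seen so far
    "cooperative_llm_player": 0.4,  # an LLM prompted to coordinate with the group
}


def sample_opponents(n_opponents, seed=None):
    """Sample the concrete opponents for one game instance from the published distribution."""
    rng = random.Random(seed)
    return rng.choices(
        list(OPPONENT_DISTRIBUTION),
        weights=list(OPPONENT_DISTRIBUTION.values()),
        k=n_opponents,
    )


if __name__ == "__main__":
    print(sample_opponents(4, seed=0))
```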
I have some code for setting up a simple black box game inside our infra that you could adapt for this if that’s useful. In general I think the structure of starting a server on localhost that implements the game and then telling the agent in the prompt that it needs to send queries to that server works well if you want the agent to interact with some program without being able to see / edit it.
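For context, here is a stripped-down sketch of that localhost-server pattern (not the actual infra code mentioned above; the endpoints, port, and toy scoring rule are placeholders). The game logic runs in a process the agent cannot read or edit, and the prompt just tells the agent the URL and the endpoints.

```python
# Minimal "game behind a localhost server" sketch: the agent interacts only
# over HTTP and never sees this file.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"round": 0, "score": 0}


class GameHandler(BaseHTTPRequestHandler):
    def _reply(self, payload, status=200):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path == "/state":
            self._reply(STATE)  # let the agent inspect the public game state
        else:
            self._reply({"error": "unknown endpoint"}, status=404)

    def do_POST(self):
        if self.path == "/play":
            length = int(self.headers.get("Content-Length", 0))
            move = json.loads(self.rfile.read(length) or b"{}")
            STATE["round"] += 1
            # Placeholder scoring; the real game logic would live here,
            # hidden from the agent.
            STATE["score"] += 1 if move.get("word") == "banana" else 0
            self._reply(STATE)
        else:
            self._reply({"error": "unknown endpoint"}, status=404)


if __name__ == "__main__":
    # The agent's prompt would say something like:
    #   "The game is served at http://localhost:8000; POST your move to /play."
    HTTPServer(("localhost", 8000), GameHandler).serve_forever()
```

The agent would then interact only via requests such as `curl -s localhost:8000/state` or `curl -s -X POST localhost:8000/play -d '{"word": "banana"}'`.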
I think open-source versions could also be interesting, where you tell the model more about the opponents including the prompts for the other models or the source code, and see how well it can use that information to perform better.
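One cheap way to set up that open-source variant (purely a sketch; the paths and file contents are placeholders) is to drop the opponents’ prompts or source code into the agent’s workspace before the run starts:

```python
from pathlib import Path


def publish_opponent_info(workspace: str, opponents: dict) -> None:
    """Write each opponent's prompt or source code to a file the agent can read."""
    info_dir = Path(workspace) / "opponent_info"
    info_dir.mkdir(parents=True, exist_ok=True)
    for name, text in opponents.items():
        (info_dir / f"{name}.txt").write_text(text)


publish_opponent_info("/tmp/agent_workspace", {
    "frequency_bot": "Source summary: always plays the most common word seen so far.",
    "cooperative_llm_player": "Prompt: 'You are playing match words; try to coordinate with the group.'",
})
```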