There’s a bunch of considerations and models mixed together in this post. Here’s a way I’m factoring some of them, which other people may also find useful.
I’d consider counterfactuality the main top-level node; things which would have been done anyway have radically different considerations from things which wouldn’t. E.g. doing an eval which (carefully, a little bit at a time) mimics what chaosGPT does, in a controlled environment prior to release, seems straightforwardly good so long as people were going to build chaosGPT soon anyway. It’s a direct improvement over something which would have happened quickly anyway in the absence of the eval. That argument still holds even if a bunch of the other stuff in the post is totally wrong or totally the wrong way of thinking about things (e.g. I largely agree with habryka’s comment about comprehensibility of future LM-based agents).
On the other hand, building a better version of chaosGPT which users would not have tried anyway, or building it much sooner, is at least not obviously an improvement. I would say that’s probably a bad idea, but that’s where the rest of the models in the post start to be relevant to the discussion.
Alas, we don’t actually know ahead of time which things people will/won’t counterfactually try, so there’s some grey zone. But at least this frame makes it clear that “what would people counterfactually try anyway?” is a key subquestion.
(Side note: also remember that counterfactuality gets trickier in multiplayer scenarios where players are making decisions based on their expectations of other players. We don’t want a situation where all the major labs build chaosGPT because they expect all the others to do so anyway. But in the case of chaosGPT, multiplayer considerations aren’t really relevant, because somebody was going to build the thing regardless of whether they expected OpenAI/Deepmind/Anthropic to build the thing. And I expect that’s the prototypical case; the major labs don’t actually have enough of a moat for small-game multiplayer dynamics to be a very good model here.)