Your timeline was off, but I think your original comment will turn out to have had the right idea. Given the leaps from GPT-3.5 to GPT-4 to Devin to Claude 3.5 Sonnet with scaffolding, marginal-seeming model updates are turning out to yield substantial gains in effective capability. It’s hard to build evaluation harnesses for fuzzy, abstract things like the syntactic complexity a model can handle, and those abilities do transfer to using models to automate their own post-training tasks, e.g. as the scaling charts in the self-alignment-via-backtranslation paper showed. The better the model, the better it accomplishes these tasks with worse task definitions and less help. The piles of prompts current agents need will become less and less necessary, at some point generated on the fly by the models themselves from terse descriptions like “make a code agent to do ${task}”. Whatever human effort goes into the next generation of unprincipled scaffolding will yield still greater returns for future models. Taken together, these factors lead me to expect SWE-Bench progress to be discontinuous and rapid, as it has been so far.
A very naive extrapolation, fitting a polynomial regression to SWE-Bench scores with model release dates as the x-values, suggests ≥80% by November 2025. It’s also important to note that the models may be contaminated.
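For concreteness, the kind of naive extrapolation described above can be sketched like this: fit a low-degree polynomial to (release date, score) pairs and solve for when the fitted curve crosses 80%. The scores and release months below are illustrative placeholders, not the actual data points behind the prediction.

```python
import numpy as np

# Hypothetical (months since Jan 2023, SWE-Bench resolved %) pairs,
# purely for illustration of the method -- not real leaderboard data.
release_months = np.array([3.0, 11.0, 15.0, 18.0])
scores = np.array([4.8, 12.5, 21.0, 33.0])

# Degree-2 least-squares fit. With this few points the result is very
# sensitive to the chosen degree and to any single data point.
coeffs = np.polyfit(release_months, scores, deg=2)

# Solve fitted_curve(x) = 80 and take the earliest crossing after the
# last observed release date.
a, b, c = coeffs
roots = np.roots([a, b, c - 80.0])
crossing = min(r.real for r in roots
               if abs(r.imag) < 1e-9 and r.real > release_months[-1])
print(f"Fitted curve reaches 80% around month {crossing:.1f} after Jan 2023")
```

This is exactly the sense in which the extrapolation is “very naive”: a quadratic fit to a handful of points encodes no mechanism, and small changes to the inputs or degree move the projected crossing date by many months.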
Not looking good for my prediction: https://www.swebench.com/
Yeah, I expect it to fall soon, but I’ll lose my bet if it doesn’t happen within a month.