I believe we are already in a substantial hardware and data overhang, and that within the next 24 months LLM agents will cross the capability threshold needed to begin recursive self-improvement. This means it is likely that a leader in AI at that time will come into possession of strongly superhuman AI (if they choose to engage their LLM agents in RSI).
Just FYI, I think I’d be willing to bet against this at 1:1 for around $500. That is, I don’t expect that a cutting-edge model will start to train other models of a greater capability level than itself, or make direct edits to its own weights with large effects on its performance (e.g. 20%+ improvements on a broad swath of tasks), and I don’t expect the best model in the world to be one whose training was primarily led by another model.
If you wish to take me up on this, I’d propose any of John Wentworth or Rob Bensinger or Lawrence Chan (i.e. whichever of them ends up available) as adjudicators if we disagree on whether this has happened.
Cool! Ok, yeah, I’m happy with any of the arbiters you proposed. I’m willing to make the bet because I don’t think it will come down to a close call; I expect the outcome to be clear.
I do think that there’s some substantial chance that the process of RSI will begin in the next 24 months, but not become publicly known right away. So my ask related to this would be:
At the end of the 24 months, we resolve the bet based on what is publicly known. If, within the 12 months following that, it becomes publicly known that a process started during the 24-month period has come to fruition and produced what is clearly the leading model, we reverse the bet’s resolution from ‘leading model not from RSI’ to ‘leading model from RSI’.
Since my hypothesis is that the RSI result would be something extraordinary, beyond what would otherwise be projected from the development trend we’ve seen from human researchers, I think a situation that ends up as a close call, akin to a ‘tie’, should resolve with my hypothesis losing.
Alright, you have yourself a bet! Let’s return on August 23rd 2026 to see who’s made $500. I’ve sent you a calendar invite to help us remember the date.
I’ll ping the arbiters just to check they’re down, and may suggest an alternative if one of them opts out.
(For the future: I think I’ll suggest arbiters to a betting counterparty via private DM, so that it’s easier for the arbiters to opt out for whatever reason, or for one of us to reject them for whatever reason.)
I’m down.
Manifold market here: https://manifold.markets/MaxHarms/will-ai-be-recursively-self-improvi
I’d take that bet! If you’re up for it, you can DM me about the details.