I really like the framing here, of asking whether we’ll see massive compute automation before [AI capability level we’re interested in]. I often hear people discuss nearby questions using IMO much more confusing abstractions, for example:
“How much are AI capabilities driven by algorithmic progress?” (problem: obscures the dependence of algorithmic progress on compute for experimentation)
“How much AI progress can we get ‘purely from elicitation’?” (lots of problems, e.g. that eliciting a capability might first require a (possibly one-time) expenditure of compute for exploration)
My inside view sense is that the feasibility of takeover-capable AI without massive compute automation is about 75% likely if we get AIs that dominate top-human-experts prior to 2040.[6] Further, I think that in practice, takeover-capable AI without massive compute automation is maybe about 60% likely.
Is this because:
You think that we’re >50% likely to not get AIs that dominate top human experts before 2040? (I’d be surprised if you thought this; see the arithmetic sketch below.)
The words “the feasibility of” importantly change the meaning of your claim in the first sentence? (I’m guessing it’s this based on the following parenthetical, but I’m having trouble parsing.)
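To make my confusion concrete, here’s the decomposition I have in mind (the labels are my own, not yours: write A for “takeover-capable AI arrives without massive compute automation” and B for “we get AIs that dominate top human experts before 2040”):

$$P(A) = P(A \mid B)\,P(B) + P(A \mid \lnot B)\,\bigl(1 - P(B)\bigr)$$

If I take your 75% as P(A|B) and read your 60% as the unconditional P(A) (which may be exactly where I’m misreading you), these fit together fine with, say, P(B) ≈ 0.8 and P(A|¬B) ≈ 0. The first possibility above (P(B) < 0.5) would instead require P(A|¬B) ≥ 0.45, which is part of why I’d be surprised if that’s what you meant.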
Overall, it seems like you put substantially higher probability than I do on getting takeover-capable AI without massive compute automation (and especially on getting a software-only singularity). I’d be very interested in understanding why. A brief outline of why this doesn’t seem that likely to me:
My read of the historical trend is that AI progress has come from scaling up all of the factors of production in tandem (hardware, algorithms, compute expenditure, etc.).
Scaling up hardware production has always been slower than scaling up algorithms, so this consideration is already factored into the historical trends. I don’t see a reason to believe that algorithms will start running away with the game.
Maybe you could counter-argue that algorithmic progress has only reflected returns to scale from AI being applied to AI research in the last 12-18 months, and that the data from this period is consistent with algorithms becoming more important relative to other factors?
I don’t see a reason to think that “takeover-capable” is a capability level at which algorithmic progress will be unusually important relative to this historical trend.
I’d be interested either in hearing you respond to this sketch or in seeing you sketch out your reasoning from scratch.
Thanks, this is helpful. So it sounds like you expect:
(1) AI progress which is slower than the historical trendline (though perhaps fast in absolute terms), because we’ll soon have finished eating through the hardware overhang; and
(2) separately, takeover-capable AI soon (i.e. before hardware manufacturers have had a chance to scale substantially).
It seems like all the action is taking place in (2). Even if (1) is wrong (i.e. even if we see substantially increased hardware production soon), that would just make takeover-capable AI arrive faster than expected; IIUC, this contradicts the OP, which seems to expect takeover-capable AI to arrive later if it’s preceded by substantial hardware scaling.
In other words, it seems like in the OP you care about whether takeover-capable AI will be preceded by massive compute automation because:
(1) [this point still holds up] this affects how legible it is that AI is a transformative technology;
(2) [it’s not clear to me this point holds up] takeover-capable AI being preceded by compute automation probably means longer timelines.
The second point doesn’t clearly hold up, because if we don’t see massive compute automation, that suggests AI progress is slower than the historical trend.