Thanks, this is helpful. So it sounds like you expect:
1. AI progress which is slower than the historical trendline (though perhaps fast in absolute terms), because we'll soon have finished eating through the hardware overhang, and
2. separately, takeover-capable AI soon (i.e. before hardware manufacturers have had a chance to scale substantially).
It seems like all the action is taking place in (2). Even if (1) is wrong (i.e. even if we see substantially increased hardware production soon), that would make takeover-capable AI happen faster than expected; IIUC, this contradicts the OP, which seems to expect takeover-capable AI to happen later if it's preceded by substantial hardware scaling.
In other words, it seems like in the OP you care about whether takeover-capable AI will be preceded by massive compute automation because:
1. [this point still holds up] this affects how legible it is that AI is a transformative technology, and
2. [it's not clear to me this point holds up] takeover-capable AI being preceded by compute automation probably means longer timelines.
The second point doesn’t clearly hold up because if we don’t see massive compute automation, this suggests that AI progress is slower than the historical trend.
Thanks for writing this reflection, I found it useful.
Just to quickly comment on my own epistemic state here:
- I haven’t read GD.
- But I’ve been stewing on some of (what I think are) the same ideas for the last few months, since William Brandon first made (what I think are) similar arguments to me in October.
- (You can judge from this Twitter discussion whether I seem to get the core ideas.)
When I first heard these arguments, they struck me as quite important and outside the wheelhouse of previous thinking on risks from AI development. I think they raise concerns that I don’t currently know how to refute, along the lines of “even if we solve technical AI alignment, we still might lose control over our future.”
That said, I’m currently in a state of “I don’t know what to do about GD-type issues, but I have a lot of ideas about what to do about technical alignment.” For me at least, I think this creates an impulse to dismiss GD-type concerns, so that I can justify continuing to do something where “the work is cut out for me” (if not in absolute terms, then at least relative to working on GD-type issues).
In my case in particular, I think it actually makes sense to keep working on technical alignment (because I think it’s going pretty productively).
But I think that other people who work in (or are considering working in) technical alignment or governance should maybe consider trying to make progress on understanding and solving GD-type issues (assuming that’s possible).