Maybe I’m in an echo chamber or have just had my head in the sand while working on AI 2027, but now that I’ve been paying attention to AI safety for almost 2 years and seen my timelines gradually collapse, I really want to engage with compelling arguments that might lengthen my timelines again.
I feel like there are a bunch of viewpoints expressed about long timelines/slow takeoff, but a lack of arguments. This is me reaching out in the hope that people might point me to the best existing write-ups, or maybe make new ones!
I am tracking things like: “takeoff will be slow because of experiment compute bottlenecks,” or “timelines to AIs with good research taste are very long,” or the even more general “look how bad AI is at all this (not-super-relevant-to-a-software-only-singularity) stuff that is so easy for humans!” But in my opinion, these are just viewpoints (which, by the way, often seem to get stated very confidently, in a way that makes me not trust the epistemology behind them). So, sadly, these statements don’t tend to lengthen my timelines.
In my view, these viewpoints would become arguments if they were more like (excuse the spitballing):
“1e28 FLOP of experiment compute is unlikely to produce much algorithmic progress” + a breakdown of why a compelling allocation of 1e28 FLOP doesn’t get very far (for the flavor of breakdown I mean, see the sketch after this list)
“Research taste is in a different reference class from the things AI has been making progress on recently” + compelling reasoning, like, maybe:
‘it has O(X) more degrees of freedom,’
‘it has way less existing data, and/or it’s way harder to create data or provide a reward signal,’
‘things are looking grim for the likelihood of generalization to these kinds of skills’
“there are XYZ properties needed for intelligence that can’t be simulated by current hardware paradigms”
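To be concrete about the first bullet, here is a minimal sketch of the kind of allocation breakdown I’d find persuasive. All of the per-experiment cost figures are order-of-magnitude assumptions I made up for illustration, not claims about real training runs:

```python
# Hypothetical breakdown of a 1e28 FLOP experiment-compute budget.
# Every cost figure below is an illustrative assumption, not a claim.

BUDGET_FLOP = 1e28

# Assumed per-experiment costs at a few scales (order-of-magnitude guesses):
experiment_costs = {
    "frontier-scale training run (~2e25 FLOP, roughly GPT-4 class)": 2e25,
    "mid-scale ablation (~1e23 FLOP)": 1e23,
    "small architecture/hyperparameter sweep (~1e21 FLOP)": 1e21,
}

for name, cost in experiment_costs.items():
    print(f"{name}: ~{BUDGET_FLOP / cost:.0e} experiments")

# The argument I'm asking for would then explain why, e.g., ~500
# frontier-scale runs plus ~1e7 small sweeps still fail to buy much
# algorithmic progress (noisy extrapolation from small scale, serial
# bottlenecks, ideas that only pay off at full scale, etc.).
```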
Currently I feel like I have a heavy tail on my timelines and takeoff speeds as a placeholder, in lieu of arguments like these, which I’m hoping exist.
A brief history of the things that have most collapsed my timelines since I became aware of AI safety <2 years ago:
Fun with +12 OOMs of Compute: IMO a pretty compelling write-up that brought down my uncertainty over the training-compute FLOP needed for AGI a bunch.
Generally working on AI 2027, which has included:
Writing and reading the capabilities progression, where each step seems plausible.
Researching how quickly compute is scaling.
Thinking about how naive and limiting current algorithms and architectures seem, and what changes labs will plausibly be able to implement soon.
The detailed benchmarks+gaps argument in the timelines forecast.
The recent trend in METR’s time horizon data (see the extrapolation sketch below).
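Since the METR trend is the most quantitative item on this list, here is a toy extrapolation of it. The current horizon and the ~7-month doubling time are assumptions pulled roughly from METR’s published analysis, and the month-long-task threshold is a hypothetical I picked; treat the output as illustrative only:

```python
import math

# Toy extrapolation of METR's time-horizon trend. Both inputs are
# illustrative assumptions, taken roughly from METR's write-up:
current_horizon_hours = 1.0   # ~50%-success time horizon of recent models
doubling_months = 7.0         # METR's reported doubling time

# Hypothetical threshold: tasks that take a human ~1 working month.
target_hours = 167.0

doublings = math.log2(target_hours / current_horizon_hours)
months = doublings * doubling_months
print(f"~{doublings:.1f} doublings -> ~{months:.0f} months (~{months / 12:.1f} years)")

# If the trend holds, month-long task horizons arrive in roughly 4-5 years;
# a long-timelines argument needs to say where and why the curve bends.
```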