Do we still not have any better timelines reports than bio anchors? From the frame of bio anchors, GPT-4 is merely on the scale of two Chinchillas, yet it outperforms above-average humans on standardized tests. The assumption that AI needs a quadrillion parameters to reach human-level capability is not a good one.
The general scaling laws are universal and also apply to biological brains, which naturally leads to a net-training-compute timeline projection. (There are now a neuroscience paper or two applying scaling laws to animal intelligence that I'd discuss if/when I update that post.)
Note I posted that a bit before GPT-4, which used roughly human-brain lifetime compute for training and is proto-AGI: far more general, in the sense of breadth of knowledge and mental skills, than any one human, but still less capable than human experts at execution. We are probably now in the sufficient-compute regime, given better software/algorithms.
I think the point of Bio Anchors was to give a generous upper bound, not to say exactly when it will happen; at least that is how I perceive it. People at a 101 level probably still have the impression that highly capable AI is multiple decades, if not centuries, away. The reason I include bio anchors here is to point toward the fact that we quite likely have at most until 2048; from that upper bound we can then scale back further.
We have the recent Open Philanthropy report that extends bio anchors, "What a compute-centric framework says about takeoff speeds" (https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/). A comment under the meta-notes mentions that I plan to include updates to timelines and takeoff in a future draft based on this report.
I assume it's incomplete: it doesn't present the other three anchors mentioned, nor the forecasting studies.