Do we still not have any better timelines reports than bio anchors? From the frame of bio anchors, GPT-4 is merely on the scale of two chinchillas, yet outperforms above-average humans at standardized tests. It’s not a good assumption that AI needs 1 quadrillion parameters to have human-level capabilities.
The general scaling laws are universal and also apply to biological brains, which naturally leads to a net-training-compute timeline projection. (There's a new neuroscience paper or two now applying scaling laws to animal intelligence that I'd discuss if/when I update that post.)
Note I posted that a bit before GPT-4, which used roughly human-brain lifetime compute for training and is proto-AGI (far more general, in the sense of breadth of knowledge and mental skills, than any one human, but still less capable than human experts at execution). We are probably now in the sufficient-compute regime, given better software/algorithms.
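For readers who want the arithmetic behind the "roughly human-brain lifetime compute" claim, here is a minimal back-of-envelope sketch. It assumes a commonly used median estimate of ~1e15 FLOP/s for the brain, ~30 years of experience, and an unofficial public estimate of ~2e25 FLOP for GPT-4's training run; all three numbers are illustrative assumptions, not figures from the comment above.

```python
# Back-of-envelope: human-brain lifetime compute vs. GPT-4 training compute.
# All numbers are illustrative assumptions:
#   - brain compute ~1e15 FLOP/s (a commonly cited median; the plausible range spans orders of magnitude)
#   - ~30 years of lived experience
#   - GPT-4 training compute ~2e25 FLOP (unofficial public estimate)

BRAIN_FLOP_PER_SECOND = 1e15
SECONDS_PER_YEAR = 365 * 24 * 3600
LIFETIME_YEARS = 30
GPT4_TRAINING_FLOP = 2e25

lifetime_flop = BRAIN_FLOP_PER_SECOND * SECONDS_PER_YEAR * LIFETIME_YEARS
ratio = GPT4_TRAINING_FLOP / lifetime_flop

print(f"Brain lifetime compute ~ {lifetime_flop:.1e} FLOP")   # ~9.5e23 FLOP
print(f"GPT-4 training compute ~ {GPT4_TRAINING_FLOP:.1e} FLOP")
print(f"Ratio                  ~ {ratio:.0f}x")                # ~20x
```

On these assumptions the two quantities land within roughly an order of magnitude of each other, which is the sense in which the training run is "roughly" brain-lifetime scale.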
I think the point of Bio Anchors was to give a generous upper bound, not to say exactly when it will happen. At least that is how I perceive it. People at a 101 level probably still have the impression that highly capable AI is multiple decades, if not centuries, away. The reason I include bio anchors here is to point towards the fact that we quite likely have at most until 2048. Starting from that upper bound, we can then scale back further.
We have the recent Open Philanthropy report that extends Bio Anchors, "What a compute-centric framework says about takeoff speeds" (https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/). There is a comment under the meta-notes mentioning that I plan to include updates to timelines and takeoff in a future draft based on this report.
I assume it’s incomplete. It doesn’t present the other 3 anchors mentioned, nor forecasting studies.
This is well-crafted. Thank you for writing this, Markov.
Participants of the ML4Good bootcamps, students of the university course I organized, and students of AISF from AIS Sweden were very happy to be able to read your summary instead of having to read the numerous papers in the corresponding AGISF curriculum; the reviews were really excellent.
Perhaps a note on Pre-Requisites would be useful.
E.g. the level of math & comp sci that’s assumed.
Suggestion: try explaining the topics to 50+ random strangers. Wildly useful for improving written work.
I don't understand how the parts fit together. For example, what's the point of presenting the (t,n)-AGI framework or the Four Background Claims?
Newcomers to the AI safety arguments might be under the impression that there will be discrete cutoffs, i.e. either we have HLAI or we don't. The point of (t,n)-AGI is to give a picture of what a continuous increase in capabilities looks like. It is also slightly more formal than simple word-based definitions of AGI. If you know of a more precise mathematical formulation of the notion of general and super intelligences, I would love it if you could point me towards it so that I can include it in the post.
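For readers who haven't seen the framework, here is a minimal sketch of how I read the definition (an illustrative formalization, not a canonical one): a system counts as a (t,n)-AGI if it matches or beats n human experts working together for time t on cognitive tasks of that scope,

$$
M \text{ is a } (t,n)\text{-AGI} \iff \forall \tau \in \mathcal{T}_t:\ \mathrm{perf}(M,\tau) \ge \mathrm{perf}(n \text{ experts working for time } t,\ \tau),
$$

where $\mathcal{T}_t$ is the set of cognitive tasks such a team could complete within time $t$. Capability then increases continuously along both axes, longer t and larger n, rather than hitting a single discrete HLAI cutoff.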
As for Four Background Claims, the reason for inclusion is to provide an intuition for why general intelligence is important, and for why, even though future systems might be intelligent, it is not the default that they will either care about our goals or follow our goals in the way the designers intended.