In my analysis of Tom Davidson’s “Takeoff Speeds Report,” I found that the dynamics of AI capability improvement, as discussed in the context of a software-only singularity, align closely with the simplified equation I′(t) = cI + f(I)I^2 from my four-year-old post on Modelling Continuous Progress. Essentially, that post describes how we switch from exponential to hyperbolic growth as the fraction of AI research done by AIs improves along a logistic curve. These are all features of the far more complex mathematical model in Tom’s report.
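To spell out that switch (this is a sketch of the limiting behaviour of my simplified equation, not anything taken from Tom’s model), the logistic f(I) interpolates between two regimes:

```latex
% Limiting regimes of I'(t) = cI + f(I) I^2 when f(I) follows a logistic curve
% Early on, f(I) is approximately 0, so growth is exponential:
I'(t) \approx c I \quad\Rightarrow\quad I(t) = I_0 e^{ct}
% Once f(I) saturates at some constant k > 0, the quadratic term dominates
% and growth is hyperbolic, diverging in finite time at t^* = 1/(k I_0):
I'(t) \approx k I^2 \quad\Rightarrow\quad I(t) = \frac{I_0}{1 - k I_0 t}
```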
In this equation, I represents the intelligence or capability of the AI system. It corresponds to the cognitive output or efficiency of the AI as described in the report, where the focus is on software improvements contributing to the overall effectiveness of AI systems. The term cI can be read as the constant external effort put into improving AI systems, consistent with the ongoing research and development efforts discussed in the report: it represents the incremental improvements in AI capabilities due to human-led development.
The second term, f(I)I^2, is particularly significant for understanding the relationship with the software-only singularity concept. Here, f(I) is a function that determines the extent to which the AI system can use its intelligence to improve itself, essentially a measure of recursive self-improvement (RSI). The report’s discussion of a software-only singularity uses a similar concept: AI systems reach a point where their self-improvement significantly accelerates their capability growth. This is analogous to f(I) increasing, so that the I^2 term (the AI’s self-improvement efforts) contributes more substantially to the overall rate of intelligence growth, I′(t). As the AI systems become more capable, they contribute more to their own development, a dynamic that both the equation and the report capture. The report has a ‘FLOP gap’ running from when AIs first start to contribute to research at all to when they fully take over, which essentially provides the lower and upper bounds to fit the f(I) curve to. Otherwise, the overall rate of change is sharper in Tom’s report, because I ignored increasing investment and increasing compute in my model, focusing only on software self-improvement feedback loops.
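To make this concrete, here is a minimal numerical sketch of my old simplified model (not Tom’s full model): forward-Euler integration of I′(t) = cI + f(I)I^2 with a logistic f(I). The particular logistic form and all parameter values (c, f_max, midpoint, steepness) are illustrative assumptions, chosen only to show the qualitative behaviour.

```python
# Minimal sketch (not Tom's model): integrate I'(t) = c*I + f(I)*I^2,
# where f(I) is a logistic curve in log-capability standing in for the
# fraction of AI research that is automated. All parameters are illustrative.
import numpy as np


def f(I, f_max=0.05, midpoint=10.0, steepness=1.0):
    """Logistic 'RSI strength' as a function of log-capability (assumed form)."""
    return f_max / (1.0 + np.exp(-steepness * (np.log(I) - midpoint)))


def simulate(c=0.05, I0=1.0, dt=0.01, t_max=400.0):
    """Forward-Euler integration of I'(t) = c*I + f(I)*I^2."""
    ts, Is = [0.0], [I0]
    t, I = 0.0, I0
    while t < t_max and I < 1e12:
        I += dt * (c * I + f(I) * I**2)
        t += dt
        ts.append(t)
        Is.append(I)
    return np.array(ts), np.array(Is)


if __name__ == "__main__":
    ts, Is = simulate()
    # Growth rate d(log I)/dt: roughly constant (exponential regime) early on,
    # then rising sharply once the f(I)*I^2 term takes over (hyperbolic regime).
    growth = np.gradient(np.log(Is), ts)
    for frac in (0.25, 0.5, 0.75, 0.99):
        i = int(frac * (len(ts) - 1))
        print(f"t = {ts[i]:7.2f}   I = {Is[i]:10.3g}   d(log I)/dt = {growth[i]:.3f}")
```

The instantaneous growth rate d(log I)/dt stays close to c for a long stretch and then rises sharply once the self-improvement term takes over, which is the qualitative exponential-to-hyperbolic switch both models share.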
One other thing I liked about Tom’s report is its focus on relatively outside-view estimates of what is needed for TAI: Bio Anchors and Epoch AI’s Direct Approach.
Maybe this is an unreasonable demand, but one concern I have about all of these alleged attempts to measure the ability of an AI to automate scientific research is that this feels like a situation where it’s unusually slippery and unusually easy to devise a metric that doesn’t actually capture what’s needed to dramatically accelerate research and development. Ideally, I’d like a metric where we know, as a matter of necessity, that a very high score means the system would be able to considerably speed up research.
For example, the Direct Approach estimate does have this property: if you can replicate to a certain level of accuracy what a human expert would say over a certain horizon length, you do in some sense have to be able to match or replicate the underlying thinking that produced it, which means being able to do long-horizon tasks. But of course, that’s a very vague upper bound. It’s not perfect: the Horizon Length metric might only cover the 90th percentile of tasks at each time scale, and the remaining 10 percent might contain the harder, more important tasks necessary for AI progress.
I think trying to anticipate and list, in a task, all the capabilities you think are needed to automate scientific progress, when we don’t really know what those are, will lead to a predictable underestimate of what’s required.
Tom Davidson’s report: https://docs.google.com/document/d/1rw1pTbLi2brrEP0DcsZMAVhlKp6TKGKNUSFRkkdP_hs/edit?usp=drivesdk
My old 2020 post: https://www.lesswrong.com/posts/66FKFkWAugS8diydF/modelling-continuous-progress