A few possible categories of situations in which we might have long timelines, off the top of my head:
Benchmarks + gaps is still best: the overall gap is somewhat larger, plus there's a slowdown in the compute doubling time after 2028, but trend extrapolations still tell us something about gap trends. This is how I would most naturally think about how timelines stretching through maybe the 2030s come about, and potentially beyond if neither of the next two holds.
Others are best (more than one of these can be true):
The current benchmarks and evaluations are so far away from AGI that trends on them don’t tell us anything (including regarding how fast gaps might be crossed). In this case one might want to identify the 1-2 most important gaps and reason about when we will cross these based on gears-level reasoning or trend extrapolation/forecasting on “real-world” data (e.g. revenue?) rather than trend extrapolation on benchmarks. Example candidate “gaps” that I often hear for these sorts of cases are the lack of feedback loops and the “long-tail of tasks” / reliability.
A paradigm shift in AGI training is needed and benchmark trends don’t tell us much about when we will achieve this (this is basically Steven’s sibling comment): in this case the best analysis might involve looking at the base rate of paradigm shifts per research effort, and/or looking at specific possible shifts.
^ This taxonomy is not comprehensive, just things I came up with quickly; I might be missing something worth including.
To give a cop-out answer to your question: if I were making a long-timelines argument, I'd argue that all three of those are forecasting approaches worth giving weight to, then aggregate them. If I had to choose just one I'd probably still go with (1), though.
edit: oh there’s also the “defer to AI experts” argument. I mostly try not to think about deference-based arguments because thinking on the object-level is more productive, though I think if I were really trying to make an all-things-considered timelines distribution there’s some chance I would adjust to longer due to deference arguments (but also some chance I’d adjust toward shorter, given that lots of people who have thought deeply about AGI / are close to the action have short timelines).
There’s also “base rate of super crazy things happening is low” style arguments which I don’t give much weight to.
For context, in a sibling comment Ryan said (and Steven agreed):
It sounds like your disagreement isn’t with drawing a link from RE-bench to (forecasts for) automating research engineering, but is instead with thinking that you can get AGI shortly after automating research engineering due to AI R&D acceleration and already being pretty close. Is that right?
Note that the comment says research engineering, not research scientists.
Now responding on whether I think the no new paradigms assumption is needed:
(Obviously you’re entitled to argue / believe that we don’t need new AI paradigms and concepts to get to AGI! It’s a topic where I think reasonable people disagree. I’m just suggesting that it’s a necessary assumption for your argument to hang together, right?)
I generally have not been thinking in these sorts of binary terms but instead thinking in terms more like “Algorithmic progress research is moving at pace X today, if we had automated research engineers it would be sped up to N*X.” I’m not necessarily taking a stand on whether the progress will involve new paradigms or not, so I don’t think it requires an assumption of no new paradigms.
However:
If you think almost all new progress in some important sense will come from paradigm shifts, the forecasting method becomes weaker because the incremental progress doesn’t say as much about progress toward automated research engineering or AGI.
You might think that it’s more confusing than clarifying to think in terms of collapsing all research progress into a single “speed” and forecasting based on that.
Requiring a paradigm shift might lead to placing less weight on lower amounts of research effort being sufficient, and even if the probability distribution is the same, what we should expect to see in the world leading up to AGI is not.
I’d also add that:
Regarding what research tasks I’m forecasting for the automated research engineer: RE-Bench is not supposed to fully represent the tasks involved in actual research engineering. That’s why we have the gaps.
Regarding to what extent having an automated research engineer would speed up progress in worlds in which we need a paradigm shift: I think it’s hard to separate out conceptual from engineering/empirical work in terms of progress toward new paradigms. My guess would be that being able to implement experiments very cheaply would substantially increase the expected number of paradigm shifts per unit time.
Here’s the structure of the argument I find most compelling (I call it the benchmarks + gaps argument); I’m uncertain about the details.
Focus on the endpoint of substantially speeding up AI R&D / automating research engineering. Let’s define our timelines endpoint as something that ~5xs the rate of AI R&D algorithmic progress (compared to a counterfactual world with no post-2024 AIs). Then make an argument that ~fully automating research engineering (experiment implementation/monitoring) would do this, along with research taste of at least the 50th percentile AGI company researcher (experiment ideation/selection).
Focus on RE-Bench, since it’s the most relevant benchmark here. For simplicity I’ll focus only on this, though for robustness more benchmarks should be considered.
Based on trend extrapolation and benchmark base rates, there’s roughly a 50% chance we’ll saturate RE-Bench by the end of 2025.
Identify the most important gaps between saturating RE-Bench and the endpoint defined in (1). The most important gaps are: (a) time horizon as measured by human time spent, (b) tasks with worse feedback loops, (c) tasks with large codebases, and (d) becoming significantly cheaper and/or faster than humans. There are more, but my best guess is that these four are the most important; we should also take unknown gaps into account.
When forecasting the time to cross the gaps, it seems quite plausible that we get to the substantial AI R&D speedup within a few years after saturating RE-Bench, so by the end of 2028 (and significantly earlier doesn’t seem crazy).
This is the most important part of the argument, and one that I have lots of uncertainty over. We have some data regarding the “crossing speed” of some of the gaps but the data are quite limited at the moment. So there are a lot of judgment calls needed and people with strong long timelines intuitions might think the remaining gaps will take a long time to cross without this being close to falsified by our data.
This is broken down into “time to cross the gaps at 2024 pace of progress” → adjusting based on compute forecasts and intermediate AI R&D speedups before reaching 5x (see the toy sketch after this list).
From substantial AI R&D speedup to AGI. Once we have the 5xing AIs, that’s potentially already AGI by some definitions but if you have a stronger one, the possibility of a somewhat fast takeoff means you might get it within a year or so after.
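Toy sketch of the adjustment described above. All numbers below (gap size, compute-growth multiplier, intermediate speedup multiplier) are placeholders I’m using purely for illustration, not figures from the draft:

```python
# Toy version of "time to cross the gaps at 2024 pace, then adjust for
# faster progress": accumulate gap-crossing work month by month while the
# pace of progress grows from compute scaling and intermediate AI R&D help.

gap_months_at_2024_pace = 36     # assumed: work needed to cross the gaps, in 2024-pace months
compute_growth_per_year = 1.35   # assumed: pace multiplier per year from compute scaling
aird_speedup_per_year = 1.25     # assumed: pace multiplier per year from intermediate AI speedups

months = 0
progress = 0.0
while progress < gap_months_at_2024_pace:
    years_elapsed = months / 12
    pace = (compute_growth_per_year ** years_elapsed) * (aird_speedup_per_year ** years_elapsed)
    progress += pace  # one calendar month of work at the current pace
    months += 1

print(f"Calendar months to cross the gaps: {months}")  # ~22 under these placeholder numbers
```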
One reason I like this argument is that it will get much stronger over time as we get more difficult benchmarks and otherwise get more data about how quickly the gaps are being crossed.
I have a longer draft which makes this argument but it’s quite messy and incomplete and might not add much on top of the above summary for now. Unfortunately I’m prioritizing other workstreams over finishing this at the moment. DM me if you’d really like a link to the messy draft.
Thanks. I edited again to be more precise. Maybe I’m closer to the median than I thought.
(edit: unimportant clarification. I just realized “you all” may have made it sound like I thought every single person on the Lightcone team was higher than my p(doom). I meant it to be more like a generic y’all to represent the group, not a claim about the minimum p(doom) of the team)
Yeah I meant more on p(doom)/alignment difficulty than timelines, I’m not sure what your guys’ timelines are. I’m roughly in the 35-55% ballpark for a misaligned takeover, and my impression is that you all are closer to but not necessarily all the way at the >90% Eliezer view. If that’s also wrong I’ll edit to correct.
edit: oh maybe my wording of “farther” in the original comment was specifically confusing and made it sound like I was talking about timelines. I will edit to clarify.
Appreciate the post. I’ve previously donated $600 through the EA Manifund thing and will consider donating again late this year / early next year when thinking through donations more broadly.
I’ve derived lots of value with regards to thinking through AI futures from LW/AIAF content (some non-exhaustive standouts: 2021 MIRI conversations, List of Lethalities and Paul response, t-AGI framework, Without specific countermeasures..., Hero Licensing). It’s unclear to me how much of the value would have been retained if LW didn’t exist, but plausibly LW is responsible for a large fraction.
In a few ways I feel not fully/spiritually aligned with the LW team and the rationalist community: my alignment difficulty/p(doom)[1] is farther from Eliezer’s[2] than my perception of the median of the LW team[3] (though closer to Eliezer than most EAs), I haven’t felt sucked in by most of Eliezer’s writing, and I feel gut-level cynical about people’s ability to deliberatively improve their rationality (edit: with large effect size) (I haven’t spent a long time examining evidence to decide whether I really believe this).
But still LW has probably made a large positive difference in my life, and I’m very thankful. I’ve also enjoyed Lighthaven, but I have to admit I’m not very observant and opinionated on conference venues (or web design, which is why I focused on LW’s content).
Twitter AI (xAI), which seemingly had no prior history of strong AI engineering, with a small team and limited resources
Both of these seem false.
Re: talent: they don’t list their full team on their site, but I know their early team includes Igor Babuschkin, who has worked at OAI and DeepMind, and Christian Szegedy, who has 250k+ citations including several foundational papers.
Re: resources, according to Elon’s early July tweet (ofc take Elon with a grain of salt) Grok 2 was trained on 24k H100s (approximately 3x the FLOP/s of GPT-4, according to SemiAnalysis). And xAI was working on a 100k H100 cluster that was on track to be finished in July. Also they raised $6B in May.
And internally, we have an anonymous RSP non-compliance reporting line so that any employee can raise concerns about issues like this without any fear of retaliation.
Are you able to elaborate on how this works? Are there any other details about this publicly? I couldn’t find more via a quick search.
Some specific questions I’m curious about: (a) who handles the anonymous complaints, (b) what is the scope of behavior explicitly (and implicitly, re: cultural norms) covered here, and (c) how are situations handled where a report would deanonymize the reporter (or narrow them down to a small number of people)?
Thanks for the response!
I also expect that if we did develop some neat new elicitation technique we thought would trigger yellow-line evals, we’d re-run them ahead of schedule.
[...]
I also think people might be reading much more confidence into the 30% than is warranted; my contribution to this process included substantial uncertainty about what yellow-lines we’d develop for the next round
Thanks for these clarifications. I didn’t realize that the 30% was for the new yellow-line evals rather than the current ones.
Since triggering a yellow-line eval requires pausing until we have either safety and security mitigations or design a better yellow-line eval with a higher ceiling, doing so only risks the costs of pausing when we could have instead prepared mitigations or better evals
I’m having trouble parsing this sentence. What do you mean by “doing so only risks the costs of pausing when we could have instead prepared mitigations or better evals”? Doesn’t pausing include focusing on mitigations and evals?
From the RSP Evals report:
As a rough attempt at quantifying the elicitation gap, teams informally estimated that, given an additional three months of elicitation improvements and no additional pretraining, there is a roughly 30% chance that the model passes our current ARA Yellow Line, a 30% chance it passes at least one of our CBRN Yellow Lines, and a 5% chance it crosses cyber Yellow Lines. That said, we are currently iterating on our threat models and Yellow Lines so these exact thresholds are likely to change the next time we update our Responsible Scaling Policy.
What’s the minimum X% that could replace 30% and would be treated the same as passing the yellow line immediately, if any? If you think that there’s an X% chance that with 3 more months of elicitation, a yellow line will be crossed, what’s the decision-making process for determining whether you should treat it as already being crossed?
In the RSP it says “It is important that we are evaluating models with close to our best capabilities elicitation techniques, to avoid underestimating the capabilities it would be possible for a malicious actor to elicit if the model were stolen” so it seems like folding in some forecasted elicited capabilities into the current evaluation would be reasonable (though they should definitely be discounted the further out they are).
(I’m not particularly concerned about catastrophic risk from the Claude 3 model family, but I am interested in the general policy here and the reasoning behind it)
The word “overconfident” seems overloaded. Here are some things I think that people sometimes mean when they say someone is overconfident:
They gave a binary probability that is too far from 50% (I believe this is the original one)
They overestimated a binary probability (e.g. they said 20% when it should be 1%)
Their estimate is arrogant (e.g. they say there’s a 40% chance their startup fails when it should be 95%), or maybe they give an arrogant vibe
They seem too unwilling to change their mind upon arguments (maybe their credal resilience is too high)
They gave a probability distribution that seems wrong in some way (e.g. “50% AGI by 2030 is so overconfident, I think it should be 10%”)
This one is pernicious in that any probability distribution gives very low percentages for some range, so being specific here seems important.
Their binary estimate or probability distribution seems too different from some sort of base rate, reference class, or expert(s) that they should defer to.
How much does this overloading matter? I’m not sure, but one worry is that it allows people to score cheap rhetorical points by claiming someone else is overconfident when in practice they might mean something like “your probability distribution is wrong in some way”. Beware of accusing someone of overconfidence without being more specific about what you mean.
I think 356 or more people in the population are needed to make there be a >5% chance of 2+ deaths in a 2-month span from that population
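A minimal sketch of that kind of calculation, using a binomial model and an assumed per-person death probability of about 0.1% per 2-month window (roughly 0.6%/year); the rate is an assumption chosen for illustration, not a figure from the original comment:

```python
def p_two_or_more_deaths(n, p):
    """P(at least 2 deaths) among n people, each dying independently
    with probability p over the window (binomial model)."""
    p_zero = (1 - p) ** n
    p_one = n * p * (1 - p) ** (n - 1)
    return 1 - p_zero - p_one

# Assumed per-person probability of death over a 2-month window (~0.6%/year).
p = 0.001

n = 1
while p_two_or_more_deaths(n, p) < 0.05:
    n += 1
print(n)  # -> 356 under this assumed rate
```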
[cross-posting from blog]
I made a spreadsheet for forecasting the 10th/50th/90th percentile for how you think GPT-4.5 will do on various benchmarks (given 6 months after the release to allow for actually being applied to the benchmark, and post-training enhancements). Copy it here to register your forecasts.
If you’d prefer, you could also use it to predict for GPT-5, or for the state-of-the-art at a certain time e.g. end of 2024 (my predictions would be pretty similar for GPT-4.5, and end of 2024).
You can see my forecasts made with ~2 hours of total effort on Feb 17 in this sheet; I won’t describe them further here in order to avoid anchoring.
There might be a similar tournament on Metaculus soon, but not sure on the timeline for that (and spreadsheet might be lower friction). If someone wants to take the time to make a form for predicting, tracking and resolving the forecasts, be my guest and I’ll link it here.
This is indeed close enough to Epoch’s median estimate of 7.7e25 FLOPs for Gemini Ultra 1.0 (this doc cites an Epoch estimate of around 9e25 FLOPs).
FYI at the time that doc was created, Epoch had 9e25. Now the notebook says 7.7e25 but their webpage says 5e25. Will ask them about it.
Interesting, thanks for clarifying. It’s not clear to me that this is the right primary frame to think about what would happen, as opposed to just thinking first about how big compute bottlenecks are and then adjusting the research pace for that (and then accounting for diminishing returns to more research).
I think a combination of both perspectives is best; the argument in favor of your frame is that there will be some low-hanging fruit from changing your workflow to adapt to the new cognitive labor.
Physical bottlenecks still exist, but is it really that implausible that the capabilities workforce would stumble upon huge algorithmic efficiency improvements? Recall that current algorithms are much less efficient than the human brain. There’s lots of room to go.
I don’t understand the reasoning here. It seems like you’re saying “Well, there might be compute bottlenecks, but we have so much room left to go in algorithmic improvements!” But the room to improve point is already the case right now, and seems orthogonal to the compute bottlenecks point.
E.g. if compute bottlenecks are theoretically enough to turn the 5x cognitive labor into only 1.1x overall research productivity, it will still be the case that there is lots of room for improvement but the point doesn’t really matter as research productivity hasn’t sped up much. So to argue that the situation has changed dramatically you need to argue something about how big of a deal the compute bottlenecks will in fact be.
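As a toy illustration of why the size of the compute bottleneck is the crux, here’s a CES-style calculation in the spirit of (but not taken from) Tom’s model; the share and substitution parameters are made up for illustration:

```python
def ces_speedup(labor_mult, compute_mult=1.0, rho=-2.0, labor_share=0.5):
    """Research output from a CES combination of cognitive labor and compute,
    relative to the baseline where both inputs are at 1x.
    rho < 0 means the inputs are strong complements (bottlenecks bite)."""
    new = (labor_share * labor_mult ** rho
           + (1 - labor_share) * compute_mult ** rho) ** (1 / rho)
    base = (labor_share + (1 - labor_share)) ** (1 / rho)  # = 1 at the baseline
    return new / base

# 5x cognitive labor, compute held fixed:
print(ces_speedup(5.0, rho=0.5))    # ~2.6x: labor and compute substitute fairly easily
print(ces_speedup(5.0, rho=-2.0))   # ~1.4x: strong complementarity
print(ces_speedup(5.0, rho=-10.0))  # ~1.1x: compute is close to a hard bottleneck
```

Under these made-up parameters, the “5x cognitive labor → only 1.1x overall productivity” scenario corresponds to the strongly complementary case, which is why how complementary compute and cognitive labor are is what does the real work.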
Imagine the current AGI capabilities employee’s typical work day. Now imagine they had an army of AI assistants that can very quickly do 10 hours worth of their own labor. How much more productive is that employee compared to their current state? I’d guess at least 5x. See section 6 of Tom Davidson’s takeoff speeds framework for a model.
Can you elaborate how you’re translating 10-hour AI assistants into a 5x speedup using Tom’s CES model?
I agree that <15% seems too low for most reasonable definitions of 1-10 hours and the singularity. But I’d guess I’m more sympathetic than you, depending on the definitions Nathan had in mind.
I think both of the phrases “AI capable doing tasks that took 1-10 hours” and “hit the singularity” are underdefined and making them more clear could lead to significantly different probabilities here.
For “capable of doing tasks that took 1-10 hours in 2024”:
If we’re saying that “AI can do every cognitive task that takes a human 1-10 hours in 2024 as well as (edit: the best) a human expert”, I agree it’s pretty clear we’re getting extremely fast progress at that point, not least because AI will be able to do the vast majority of tasks that take much longer than that by the time it can do all 1-10 hour tasks. However, if we’re using a weaker definition like the one Richard used (“on most cognitive tasks, it beats most human experts who are given 1-10 hours to perform the task”), I think it’s much less clear due to human interaction bottlenecks.
Also, it seems like the distribution of relevant cognitive tasks that you care about changes a lot on different time horizons, which further complicates things.
Re: “hit the singularity”, I think in general there’s little agreement on a good definition here. E.g. the definition in Tom’s report is based on the doubling time of “effective compute in 2022-FLOP” shortening after “full automation”, and it’s unclear to me what that corresponds to in terms of real-world impact, as I think both of these terms are also underdefined/hard to translate into actual capability and impact metrics.
I would be curious to hear the definitions you and Nathan had in mind regarding these terms.
In his AI Insight Forum statement, Andrew Ng puts 1% on “This rogue AI system gains the ability (perhaps access to nuclear weapons, or skill at manipulating people into using such weapons) to wipe out humanity” in the next 100 years (conditional on a rogue AI system that doesn’t go unchecked by other AI systems existing). And overall a 1 in 10 million chance of AI causing extinction in the next 100 years.
This is clarifying for me, appreciate it. If I believed (a) that we needed a paradigm shift like the ones to LLMs in order to get AI systems resulting in substantial AI R&D speedup, and (b) that trend extrapolation from benchmark data would not be informative for predicting these paradigm shifts, then I would agree that the benchmarks + gaps method is not particularly informative.
Do you think that’s a fair summary of (this particular set of) necessary conditions?
(edit: didn’t see @Daniel Kokotajlo’s new comment before mine. I agree with him regarding disagreeing with both sub-claims but I think I have a sense of where you’re coming from.)