Yes, but I don’t think he claims to have a better forecasting track record than them. I think he would say he is epistemically better in general, but, as you say, he doesn’t participate on Metaculus and barely has any track record to speak of, so he’d have to be pretty delusional to think his track record is better.
I too would claim such a thing, or something similar at least—I’d say that my forecasts about AGI are better than the typical Metaculus forecast about AGI; however, I would not claim to have a great forecasting track record, or even a better forecasting track record than Metaculus, because (a) I don’t have much of a track record at all, and (b) there are lots of other non-AGI questions on Metaculus, and on those questions I expect to do worse than Metaculus on average, lacking expertise as I do. (Alas, the AGI questions have mostly not resolved yet and will not resolve for some years, so we can’t just check those.)
Yes, I agree with the points you make about e.g. the importance of track records, the importance of betting, etc. etc. No, I don’t expect you to take my word for anything (or Yudkowsky’s). Yes, I think it’s reasonable for outsiders / people who aren’t super familiar with the literature on AI to defer to Metaculus instead of me or Yudkowsky.
Perhaps this explains my position better:
If I saw a Yudkowsky tweet saying “I have a great forecasting track record” or “I have a better forecasting track record than Metaculus,” my immediate reaction would be “Lol no you don’t, fuck off.” When I read the first few lines of your post, I expected to shortly see a pic of such a tweet as proof. In anticipation, my “lol fuck you Yudkowsky” reaction already began to rise within me.
But then when I saw the stuff you actually quoted, it seemed… much more reasonable? In particular, him dumping on Metaculus for updating so hard on Gato seemed… correct? Metaculus really should have updated earlier; Gato just put together components that had already been published in the last few years. So then I felt that if I had only skimmed the first part of your post and not read the actual post, I would have had an unfairly negative opinion of Yudkowsky, due to the language you used: “He has several times claimed to have a great forecasting track record.”
For what it’s worth, I agree that Yudkowsky is pretty rude and obnoxious & that he should probably get off Twitter if this is how he’s going to behave. Like, yes, he has alpha about this AI stuff; he gets to watch as the “market” gradually corrects and converges to his position. Yay. Good for him. But he’s basically just stroking his own ego by tweeting about it here; I don’t see any altruistic purpose served by it.
I am a forecaster on that question: the main doubt I had was if/when someone would try to do wordy things + game playing on a “single system”. It seemed plausible to me that this particular combination of capabilities would never become an exciting area of research, in which case the date at which an AI could first do these things would be substantially after the date at which this combination of tasks was achievable with focused effort. Gato was a substantial update because it does exactly these tasks, so I no longer see much possibility that the benchmark is achieved only after the capabilities are substantially overshot.
I also tend to defer somewhat to the community.
I was at 2034 when the community was at 2042, and I updated further to 2026 on the Gato news.
That’s good feedback. I can see why the wording I used gives the wrong impression—he didn’t literally say out loud that he has “a great forecasting track record”. It still seems to me heavily implied by several things he’s said, especially what he said to Paul.
I think the point you raise is valid enough. I have crossed out the word “claimed” in the essay, and replaced it with “implied”.
OK, thanks!
Well, when he says something like this:
I claim that I came off better than Robin Hanson in our FOOM debate compared to the way that history went. I’d claim that my early judgments of the probable importance of AGI, at all, stood up generally better than early non-Yudkowskian EA talking about that.
...He’s saying something notably positive about some sort of track record. That plus the comments he made about the Metaculus updates, and he clearly thinks he’s been doing well. Yes, he doesn’t have a track record on Metaculus (I’m not even aware of him having a profile). But if I just read what he writes and see what he’s implying, he thinks he’s doing much better at predicting events than somebody, and many of those somebodys seem to be people closer to Hanson’s view, and also seem to be Metaculus predictors.
Also, perhaps I’m using the word “great” more informally than you in this context.
As an example of the kind of point that one might use in deciding who “came off better” in the FOOM debate, Hanson predicted that “AIs that can parse and use CYC should be feasible well before AIs that can parse and use random human writings”, which seems pretty clearly falsified by large language models—and that also likely bears on Hanson’s view that “[t]he idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy”.
As you point out, however, this exercise of looking at what was said and retrospectively judging whose worldview seemed “less surprised” by what happened is definitely not the same thing as a forecasting track record. It’s too subjective; rationalizing why your views were “less surprised” by what happened than some other view (without either view having specifically predicted what happened) is not hugely more difficult than rationalizing your views in the first place.
There was a lot of other stuff in that debate.
I think the passage you quote there is just totally correct though. If you turn the clock back ten years or more to when all that stuff was happening, Yudkowsky was at the “AGI is really important and coming sooner than you think” end of the spectrum, the other side seemed to be “AGI is either not ever going to be a thing, or not ever going to be important,” and the median opinion was something like “Plausibly it’ll be an important thing but it’s coming 50-100 years from now.” At least that’s my impression from the 9-ish years I’ve been lurking on LW and the 7-ish years I’ve been talking to people in the community. (Gosh I’m old.)
In the passage you quote I interpret Yud as saying that when you compare his claims about AGI back then to claims that other rationalists and EAs were making, people like Hanson, with the benefit of hindsight his claims look closer to the truth. I think that’s correct. Of course the jury is still out, since most of the claims on both sides were about things that haven’t happened yet (AGI is still not here), but e.g. it’s looking pretty unlikely that uploads/ems will come first, it’s looking pretty unlikely that AGI will be an accumulation of specialized modules built by different subcontractors (like an F-35 fighter jet lol), it’s looking pretty likely that it’ll happen in the ’20s or ’30s instead of the ’60s or ’70s… most of all, it’s looking pretty likely that it’ll be a Big Deal, something we all should be thinking about and preparing for now.
On overall optimism it seems clear that Eliezer won—Robin seems unusually bad, while Eliezer seems unusually good. I also think on “domain-specific engineering” vs “domain-general engineering” Eliezer looks unusually good while Robin looks typical.
But I think there are also comparably-important substantive claims that look quite bad. I don’t think Eliezer has an unambiguous upper hand in the FOOM debate at all:
The debate was about whether a small group could quickly explode to take over the world. AI development projects are now billion-dollar affairs and continuing to grow quickly, important results are increasingly driven by giant projects, and 9 people taking over the world with AI looks if anything even more improbable and crazy than it did then. Now we’re mostly talking about whether a $10 trillion company can explosively grow to $300 trillion as it develops AI, which is just not the same game in any qualitative sense. I’m not sure Eliezer has many precise predictions he’d stand behind here (setting aside the insane pre-2002 predictions), so it’s not clear we can evaluate his track record, but I think they’d look bad if he’d made them. This is really one of the foundational claims of Robin’s worldview and one of the biggest things he’s objecting to about Eliezer’s story.
I think the “secret sauce of intelligence” view is looking worse and worse, as is the “village idiot to Einstein is no gap at all” view. Again, I’m not sure whether Eliezer ever articulated this into concrete predictions, but if he did I think they would look bad. It now seems very likely that we will have AI systems that can contribute meaningfully to R&D before they “wake up” in Eliezer’s sense, and that their contributions will look more like normal human contributions. (This may be followed by an Eliezer-style takeoff, but at this point that looks more like a subsequent round of the singularity, after crazy acceleration and transformation caused by more mundane AI—as is the case in Robin’s best-guess story.) Similarly, we are seeing a lot of AI systems at intermediate levels of capability that Eliezer appeared to consider unlikely, e.g. AI systems that can write a bit of code or perform mediocrely on programming competitions but aren’t geniuses, that can speak clearly and understand the world OK but whose impact is only modest, and that take multiple years to cross the human range at many tasks and games.
Even on the timelines stuff, I don’t think you should just give Eliezer a pass for earlier, even more aggressive technological predictions, or give him too much credit for a specific prediction when he didn’t stick his neck out in a way that would have been wrong if AI hadn’t made a lot of progress in the last 10 years. I think this is at an extremely high risk of a very boring kind of hindsight bias. (This gets me in particular, because Eliezer’s actual track record on AI timelines seems to me to be so much worse than that of the historical bioanchors people he insults.)
It seems reasonable to dunk on Hanson for liking CYC, but I think most people would say that CYC is a lot closer to the right idea than Eurisko is, and that “learn how to engineer from humans” is a lot closer than “derive it all from first principles.” Again, it’s hard to evaluate any of these things, since Eliezer is not saying much that’s specific enough to be wrong, but he’s also not saying much that’s specific enough to be right. I think that large language models just qualitatively don’t look anything like the kind of AI that Eliezer describes.
Overall I really think we’re in the regime where selective interpretation and emphasis can very easily let either side think they had the upper hand here.
(Note that I also think this is true about a lot of my “prediction track record.” I think the biggest difference is that I’m just less smug and condescending about it, given how unclear the record really is, and don’t constantly dismiss people as “the kind of reasoning that doesn’t work here in the real world”—I try to mostly bring it up when someone like Eliezer is making a big point of their own track record.)
Robin on AI timelines just seems particularly crazy. We can’t yet settle the ems vs. de novo AI bet, but I think the writing is on the wall, and his forecasting methodology for the 300-year timeline seems so crazy—ask people in a bunch of fields “how far have you come toward human level, and is it speeding up?” and then lean entirely on that. (I think many of the short-term predictions are basically falsified now, in that if you ask people the same question they will give much higher percentages, and many of the tasks are solved.)
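For concreteness, here is a minimal sketch of the arithmetic behind that methodology, using my own illustrative numbers and a naive linear extrapolation rather than Hanson's exact calculation:

```python
# Naive linear extrapolation: if a field reports being `fraction_done` of the
# way to human level after `years_elapsed` years, how long until it gets there
# at the same rate? (Illustrative only.)
def naive_remaining_years(fraction_done: float, years_elapsed: float) -> float:
    rate = fraction_done / years_elapsed  # fraction of the distance covered per year
    return (1.0 - fraction_done) / rate   # years remaining at that rate

# Survey answers in the 5-10%-per-20-years range give multi-century timelines,
# which is the ballpark of the ~300-year figure criticized above.
for pct in (0.05, 0.10):
    print(f"{pct:.0%} done in 20 years -> ~{naive_remaining_years(pct, 20):.0f} more years")
```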
ETA: Going through the oldest examples from Robin’s survey to see how the methodology fares:
Melanie Mitchell gives 5% progress in 20 years towards human-level analogical reasoning. But the kinds of string manipulation used in Mitchell’s copycat problem seem to just be ~totally solved by the current version of the OpenAI API (see the sketch after this list). I tried 10 random questions from this list, and the only one it got wrong was “a → ab, z → ?”, where it said “z → z b” instead of what I presume was the intended “z → z y”. And in general it seems like we’ve come quite a long way.
Murray Shanahan gives 10% progress on “knowledge representation” in 20 years, but I don’t know what this means so I’ll skip over it.
Wendy Hall gives 1% on “computer-assisted training” in 20 years. I don’t know how to measure progress in this area, but I suspect any reasonable measure for the last 10 years will be >> 5%.
Claire Cardie and Peter Norvig give 20% progress on NLP in 20 years. I think that 2013-2023 has seen much more than another 20% of progress, in that it’s now becoming difficult to write down any task in NLP for which models have subhuman performance (and instead we just use language to express increasingly difficult non-language tasks).
Aaron Dollar gives <1% on robotic grasping in 20 years. Hard to evaluate quantitatively, but seems very hard to argue we’ve come <10% more of the way in the last 10 years and in simulation I think we may just be roughly human level.
Timothy Meese gives 5% progress on early human vision processing in 20 years, but I think it now seems like we are quite close to (or even past) human level at this task. Though maybe he’s talking about how much we understand early human vision processing (in which case it’s not clear what it’s doing in this list).
At any rate, the methodology looks to me like it’s making terrible predictions all over the place. I think I’m on the record objecting that these estimates seem totally unreasonable, though the only comment I can find by me on it is 5 years ago here where I say I don’t think it’s informative and give >20% that existing ML in particular will scale to human-level AI.
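For reference, the kind of quick check described in the Melanie Mitchell item above can be run in a few lines against the (older) openai Python client's completions endpoint. This is a hedged sketch: the model name and prompt format are my own stand-ins, not a record of how those 10 questions were actually run; the analogy used is the one quoted above.

```python
# Sketch of probing a copycat-style letter-string analogy with the 0.x-era
# openai client. Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
import openai

prompt = "a -> ab\nz -> "  # the analogy question quoted in the comment above

resp = openai.Completion.create(
    model="text-davinci-002",  # a davinci-class completions model (assumption)
    prompt=prompt,
    max_tokens=5,
    temperature=0,             # make the quick check roughly deterministic
)
print(resp["choices"][0]["text"].strip())
```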
Regarding the weird mediocrity of modern AI, isn’t part of this that GPT-3-style language models are almost aiming for mediocrity?
Would a hypothetical “AlphaZero of code” which built its own abstractions from the ground up—and presumably would not reinvent Python (AlphaCode is cool and all, but it does strike me as a little absurd to see an AI write Python)—have this property?
Game-playing AI is also mediocre, as are models fine-tuned to write good code. 100B parameter models trained from scratch to write code (rather than to imitate human coders) would be much better but would take quite a lot longer to train, and I don’t see any evidence that they would spend less time in the mediocre subhuman regime (though I do agree that they would more easily go well past human level).
The debate was about whether a small group could quickly explode to take over the world. AI development projects are now billion-dollar affairs and continuing to grow quickly, important results are increasingly driven by giant projects, and 9 people taking over the world with AI looks if anything even more improbable and crazy than it did then.
Maybe you mean something else there, but wasn’t OpenAI like 30 people when they released GPT-2, and maybe like 60 when they released GPT-3? This doesn’t seem super off from 9 people, and my guess is there is probably a subset of 9 people that you could poach from OpenAI that could have made 80% as fast progress on that research as the full set of 30 people (at least from talking to other people at OpenAI, my sense is that contributions are very heavy-tailed).
Like, my sense is that cutting-edge progress is currently made by a few large teams, but that cutting-edge performance can easily come from 5-10 person teams, and that if we end up trying to stop race dynamics, the risk from 5-10 person teams would catch up pretty quickly with the risk from big teams, if the big teams halted progress. It seems to me that if I sat down with 8 other smart people, I could probably build a cutting-edge system within 1-2 years. The training costs of modern systems are only in the $10 million range, which is well within the reach of a 10-person team.
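As a rough sanity check on that cost figure, here is a back-of-envelope sketch using the standard ~6 × parameters × tokens FLOP approximation for dense transformers; the hardware throughput, utilization, and price per GPU-hour are my own assumptions, not anyone's reported numbers:

```python
# Back-of-envelope training cost for a GPT-3-scale model (illustrative only).
params = 175e9                      # parameter count (GPT-3 scale)
tokens = 300e9                      # training tokens (roughly GPT-3 scale)
train_flops = 6 * params * tokens   # ~3.2e23 FLOPs

peak_flops = 312e12                 # assumed A100-class BF16 peak FLOP/s
utilization = 0.35                  # assumed fraction of peak actually achieved
dollars_per_gpu_hour = 2.0          # assumed cloud price

gpu_hours = train_flops / (peak_flops * utilization) / 3600
print(f"~{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * dollars_per_gpu_hour:,.0f}")
# Lands in the single-digit millions under these assumptions; older hardware,
# lower utilization, or multiple runs push it toward the ~$10M figure above.
```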
Of course, we might see that go up, but I feel confused about why you are claiming that it’s crazy for 10-person teams to build systems that are at the cutting edge of capabilities and that might therefore pose substantial risk.
GPT-2 is very far from taking over the world (and was indeed <<10 people). GPT-3 was bigger (though still probably <10 people depending how you amortize infrastructure), and remains far from taking over the world. Modern projects are >10 people, and still not yet taking over the world. It looks like it’s already not super plausible for 10 people to catch up, and it’s rapidly getting less plausible. The prediction isn’t yet settled, but neither are the predictions in Eliezer’s favor, and it’s clear which way the wind blows.
These projects are well-capitalized, with billions of dollars in funding now and valuations rapidly rising (though maybe a dip right now with tech stocks overall down ~25%). These projects need to negotiate absolutely massive compute contracts, and lots of the profit looks likely to flow to compute companies. Most of the work is going into the engineering aspects of these projects. There are many labs with roughly-equally-good approaches, and no one has been able to pull much ahead of the basic formula—most variation is explained by how big a bet different firms are willing to make.
Eliezer is not talking about 10 people making a dominant AI because the rest of the world is busy slowing down out of concern for AI risk; he is talking about the opposite situation, of 10 people making a dominant AI which is also safer, while people are barreling ahead, which is possible because they are building AI in a better way. In addition to 10 people, the view “you can find a better way to build AI that’s way more efficient than other people” is also starting to look increasingly unlikely as performance continues to be dominated by scale and engineering rather than clever ideas.
Everything is vague enough that I might be totally misunderstanding the view, and there is a lot of slack in how you compare it to reality. But for me the most basic point is that this is not a source of words about the future that I personally should be listening to; if there is a way to turn these words into an accurate visualization of the future, I lack the machinery to do so.
(The world I imagined when I read Robin’s words also looks different from the world of today in a lot of important ways. But it’s just not such a slam dunk comparing them, on this particular axis it sure looks more like Robin’s world to me. I do wish that people had stated some predictions so we could tell precisely rather than playing this game.)
The historical track record of software projects is that it’s relatively common for a small team of ~10 people to outperform 1000+ person teams. Indeed, I feel like this is roughly what happened with DeepMind and OpenAI. I feel like in 2016 you could have said that current AGI projects already have 500+ employees and are likely to grow even bigger, so it’s unlikely that a small 10-person team could catch up, and then suddenly the most cutting-edge project was launched by a 10-person team. (Yes, that 10-person team needed a few million dollars, but a few million dollars are not that hard to come by in the tech sector.)
My current guess is that we will continue to see small 10-person teams push the cutting edge forward in AI, just as we’ve seen in most other domains of software.
In addition to 10 people, the view “you can find a better way to build AI that’s way more efficient than other people” is also starting to look increasingly unlikely as performance continues to be dominated by scale and engineering rather than clever ideas.
I do agree with this in terms of what has been happening in the last few years, though I do expect it to break down as we see more things in the “leveraging AI to improve AI development progress” and “recursive self-improvement” categories, which seem to currently be coming over the horizon. It already seems pretty plausible to me that a team that had exclusive access to a better version of Codex would have some chance of outperforming other software development teams by 3-5x, which would then feed into more progress on the performance of the relevant systems.
I do think this is substantially less sharp than what Eliezer was talking about at the time, but I personally find the “a small team of people who will use AI tools to develop AI systems, can vastly outperform large teams that are less smart about it” hypothesis pretty plausible, and probably more likely than not for what will happen eventually.
I think “team uses Codex to be 3x more productive” is more like the kind of thing Robin is talking about than the kind of thing Eliezer is talking about (e.g. see the discussion of UberTool, or just read the foom debate overall). And if you replace 3x with a more realistic number, and consider the fact that right now everyone is definitely selling that as a product rather than exclusively using it internally as a tool, then it’s even more like Robin’s story.
Everyone involved believes in the possibility of tech startups, and I’m not even sure if they have different views about the expected returns to startup founders. The 10 people who start an AI startup can make a lot of money, and will typically grow to a large scale (with significant dilution, but still quite a lot of influence for founders) before they make their most impressive AI systems.
I think this kind of discussion seems pretty unproductive, and it mostly just reinforces the OP’s point that people should actually predict something about the world if we want this kind of discussion to be remotely useful for deciding how to change beliefs as new evidence comes in (at least about what people / models / reasoning strategies work well). If you want to state any predictions about the next 5 years I’m happy to disagree with them.
The kinds of thing I expect are that (i) big models will still be where it’s at, (ii) compute budgets and team sizes continue to grow, (iii) improvements from cleverness continue to shrink, (iv) influence held by individual researchers grows in absolute terms but continues to shrink in relative terms, (v) AI tools become 2x more useful over more like a year than a week, (vi) AI contributions to AI R&D look similar to human contributions in various ways. Happy to put #s on those if you want to disagree on any. Places where I agree with the foom story are that I expect AI to be applied differentially to AI R&D, I expect the productivity of individual AI systems to scale relatively rapidly with compute and R&D investment, I expect overall progress to qualitatively be large, and so on.
Yeah, I think this is fair. I’ll see whether I can come up with some good operationalizations.
Possible counterevidence (10 months later)?—the GPT-4 contributors list lists almost 300 names.[1]
[1] Methodology: I copied text from the contributors page (down to just before it says “We also acknowledge and thank every OpenAI team member”), used some quick Emacs keyboard macros to munge out the section headers and non-name text (like “[topic] lead”), deduplicated and counted in Python (and subtracted one for a munging error I spotted after the fact), and got 290. Also, you might not count some sections of contributors (e.g., product management, legal) as relevant to your claim.
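A minimal Python sketch of the final dedupe-and-count step described in that footnote, assuming the header/role cleanup has already been done and the result saved one entry per line (contributors.txt is a hypothetical filename):

```python
# Count unique contributor names from the cleaned-up pasted list (sketch only).
with open("contributors.txt", encoding="utf-8") as f:  # hypothetical local file
    names = {line.strip() for line in f if line.strip()}  # dedupe, ignoring blanks
print(len(names))
```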
Yep, that is definitely counterevidence! Though my model did definitely predict that we would also continue seeing huge teams make contributions; of course each marginal major contribution is still evidence.
I have more broadly updated against this hypothesis over the past year or so, though I still think there will be lots of small groups of people quite close to the cutting edge (like less than 12 months behind).
Currently the multiple on stuff like better coding tools and setting up development to be AI-guided has just barely entered the stage where it feels plausible that a well-set-up team could just completely destroy large incumbents. We’ll see how it develops in the next year or so.
It seems to me that if I sat down with 8 other smart people, I could probably build a cutting-edge system within 1-2 years.
If you’re not already doing machine learning research and engineering, I think it takes more than two years of study to reach the frontier? (The ordinary software engineering you use to build Less Wrong, and the futurism/alignment theory we do here, are not the same skills.)
As my point of comparison for thinking about this, I have a couple hundred commits in Rust, but I would still feel pretty silly claiming to be able to build a state-of-the-art compiler in 2 years with 7 similarly-skilled people, even taking into account that a lot of the work is already done by just using LLVM (similar to how ML projects can just use PyTorch or TensorFlow).
Is there some reason to think AGI (!) is easier than compilers? I think “newer domain, therefore less distance to the frontier” is outweighed by “newer domain, therefore less is known about how to get anything to work at all.”
If you’re not already doing machine learning research and engineering, I think it takes more than two years of study to reach the frontier? (The ordinary software engineering you use to build Less Wrong, and the futurism/alignment theory we do here, are not the same skills.)
Yeah, to be clear, I think I would try hard to hire some people with more of the relevant domain-knowledge (trading off against some other stuff). I do think I also somewhat object to it taking such a long time to get the relevant domain-knowledge (a good chunk of people involved in GPT-3 had less than two years of ML experience), but it doesn’t feel super cruxy for anything here, I think?
“newer domain, therefore less is known about how to get anything to work at all.”
To be clear, I agree with this, but I think this mostly pushes towards making me think that small teams with high general competence will be more important than domain-knowledge. But maybe you meant something else by this.
I think the argument “newer domain hence nearer frontier” still holds. The fact that we don’t know how to make an AGI doesn’t bear on how much you need to learn to match an expert.
Now we’re mostly talking about whether a $10 trillion company can explosively grow to $300 trillion as it develops AI, which is just not the same game in any qualitative sense.
To be clear, this is not the scenario that I worry about, and neither is it the scenario most other people I talk to about AI Alignment tend to worry about. I recognize there is disagreement within the AI Alignment community here, but this sentence sounds like it’s describing some kind of consensus, when I think it clearly isn’t. I don’t expect we will ever see a $300 trillion company before humanity goes extinct.
I’m just using $300 trillion as a proxy for “as big as the world.” The point is that we’re now mostly talking about Google building TAI with relatively large budgets.
It’s not yet settled (since of course none of the bets are settled). But current projects are fairly big, the current trend is to grow quite quickly, and current techniques have massive returns to scale. So the wind certainly seems to be blowing in that direction about as hard as it could.
Well, $300 trillion seems like it assumes that offense is about as hard as defense, in this analogy. Russia launching a nuclear attack on the U.S., and this somehow chaining into a nuclear winter that causes civilizational collapse, does not imply that Russia has “grown to $300 trillion”. Similarly, an AI developing a bioweapon for like $5,000 that destroys humanity’s ability to coordinate or orient and kills 99% of the population, and then rebuilding over the course of a few years without humans around, also doesn’t look at all like “explosive growth to $300 trillion”.
This seems important, since you are saying that “[this] is just not the same game in any qualitative sense”, whereas I feel like something like the scenario above seems most likely, we haven’t seen much evidence to suggest it’s not what’s going to happen, and it sounds quite similar to what Eliezer was talking about at the time. Like, I think de facto an AI probably won’t do an early strike like this that only kills 99% of the population, and will instead wait longer to make sure it can do something that has less of a chance of failure, but the point of no return will have been crossed when a system first has the capability to kill approximately everyone.
It’s not yet settled (since of course none of the bets are settled). But current projects are fairly big, the current trend is to grow quite quickly, and current techniques have massive returns to scale. So the wind certainly seems to be blowing in that direction about as hard as it could.
I agree with this. I agree that it seems likely that model sizes will continue going up, and that cutting-edge performance will probably require at least on the order of $100M in a few years, though it’s not fully clear how much of that money is going to be wasted, and to what extent a team could reproduce the cutting-edge results without access to the full $100M. I do think that, inasmuch as this comes true, it makes me more optimistic that cutting-edge capabilities will have at least like 3 years of lead before a 10-person team could reproduce something for a tenth of the cost (which is, I’d guess, roughly what has happened historically?).
Eliezer very specifically talks about AI systems that “go foom,” after which they are so much better at R&D than the rest of the world that they can very rapidly build molecular nanotechnology, and then build more stuff than the rest of the world put together.
This isn’t related to offense vs defense, that’s just >$300 trillion of output conventionally-measured. We’re not talking about random terrorists who find a way to cause harm, we are talking about the entire process of (what we used to call) economic growth now occurring inside a lab in fast motion.
I think he lays this all out pretty explicitly. And for what it’s worth I think that’s the correct implication of the other parts of Eliezer’s view. That is what would happen if you had a broadly human-level AI with nothing of the sort anywhere else. (Though I also agree that maybe there’d be a war or decisive first strike first, it’s a crazy world we’re talking about.)
And I think in many ways that’s quite close to what will happen. It just seems most likely to take years instead of months, to use huge amounts of compute (and therefore share proceeds with compute providers and a bunch of the rest of the economy), to result in “AI improvements” that look much more similar to conventional human R&D, and so on.
Good points; those do seem to be cases in which Hanson comes out better. As you say, it comes down to how heavily you weight the stuff Yudkowsky beat Hanson on vs. the stuff Hanson beat Yudkowsky on. I also want to reiterate that I think Yudkowsky is being obnoxious.
(I also agree that the historical bio anchors people did remarkably well & much better than Yudkowsky.)
Note that I feel like, if we look at the overall disagreements in 2008, Eliezer’s view overall seems better than Robin’s. So I think we’re probably on the same page here.
Regarding the Einstein stuff, do you think Einstein’s brain had significantly more compute than a 100 IQ person? I would be very surprised if Einstein had more than, I don’t know, twice the computing power of a normal brain. And so the claim that the distance between an idiot and Einstein is far smaller than that between the idiot and a chimpanzee still seems true to me.
I’d guess that Einstein’s brain probably used something like 10% more compute than the median person.
But to the extent that there is any prediction it’s about how good software will be, and how long it will spend in the range between mediocre and excellent human performance, rather than about how big Einstein’s brain is. And that prediction seems to be faring poorly across domains.
This is a great list and I thank you for describing it. Good examples of one of the claims I’m making—there’s nothing about their debate that tells us much meaningful about Eliezer’s forecasting track record. In fact I would like to link to this comment in the original post because it seems like important supplemental material, for people who are convinced that the debate was one-sided.
I agree about ems being nowhere in sight, versus steady progress in other methods. I also disagree with Hanson about timeframe (I don’t see it taking 300 years). I also agree that general algorithms will be very important, probably more important than Hanson said. I also put a lower probability on a prolonged AI winter than Hanson.
But as you said, AGI still isn’t here. I’d take it a step further—did the Hanson debates even have unambiguous, coherent ideas of what “AGI” refers to?
Of progress toward AGI, “how much” happened since the Hanson debate? This is thoroughly nebulous and gives very little information about a forecasting track record, even though I disagree with Hanson. With the way Eliezer is positioned in this debate, he can just point to any impressive developments and say that they go in his favor. We have practically no way of objectively evaluating that. If someone already agrees “the event happened”, they update that Eliezer got it right. If they disagree, or if they aren’t sure what the criteria were, they don’t.
Being able to say post hoc that Eliezer “looks closer to the truth” is very different from how we measure forecasting performance, and for good reason. If I were judging this, the “prediction” absolutely resolves as “ambiguous”, despite me disagreeing with Hanson on more points in their debate.
Yes, but I don’t think he claims to have a better forecasting track record than them. I think he would say he is epistemically better in general, but as you say he doesn’t participate on Metaculus, he barely has any track record to speak of, so he’d have to be pretty delusional to think his track record is better.
I too would claim such a thing, or something similar at least—I’d say that my forecasts about AGI are better than the typical Metaculus forecast about AGI; however, I would not claim to have a great forecasting track record or even a better forecasting track record than Metaculus, because (a) I don’t have much of a track record at all, and (b) there are lots of other non-AGI questions on metaculus and on those questions I expect to do worse than Metaculus on average, lacking expertise as I do. (Alas, the AGI questions have mostly not resolved yet and will not resolve for some years, so we can’t just check those.)
Yes, I agree with the points you make about e.g. the importance of track records, the importance of betting, etc. etc. No, I don’t expect you to take my word for anything (or Yudkowsky’s). Yes, I think it’s reasonable for outsiders / people who aren’t super familiar with the literature on AI to defer to Metaculus instead of me or Yudkowsky.
Perhaps this explains my position better:
If I saw a Yudkowsky tweet saying “I have a great forecasting track record” or “I have a better forecasting track record than Metaculus” my immediate reaction would be “Lol no you don’t fuck off.” When I read the first few lines of your post, I expected to shortly see a pic of such a tweet as proof. In anticipation my “lol fuck you Yudkowsky” reaction already began to rise within me.
But then when I saw the stuff you actually quoted, it seemed… much more reasonable? In particular, him dumping on Metaculus for updating so hard on Gato seemed… correct? Metaculus really should have updated earlier, Gato just put together components that were already published in the last few years. So then I felt that if I had only skimmed the first part of your post and not read the actual post, I would have had an unfairly negative opinion of Yudkowsky, due to the language you used: “He has several times claimed to have a great forecasting track record.”
For what it’s worth, I agree that Yudkowsky is pretty rude and obnoxious & that he should probably get off Twitter if this is how he’s going to behave. Like, yes, he has alpha about this AI stuff; he gets to watch as the “market” gradually corrects and converges to his position. Yay. Good for him. But he’s basically just stroking his own ego by tweeting about it here; I don’t see any altruistic purpose served by it.
I am a forecaster on that question: the main doubt I had was if/when someone would try to do wordy things + game playing on a “single system”. Seemed plausible to me that this particular combination of capabilities never became an exciting area of research, so the date at which an AI can first do these things would then be substantially after this combination of tasks would be achievable with focused effort. Gato was a substantial update because it does exactly these tasks, so I no longer see much reason possibility that the benchmark is achieved only after the capabilities are substantially overshot.
I also tend to defer somewhat to the community.
I was at 2034 when the community was at 2042, and I updated further to 2026 on the Gato news.
That’s good feedback. I can see why the wording I used gives the wrong impression—he didn’t literally say out loud that he has “a great forecasting track record”. It still seems to me heavily implied by several things he’s said, especially what he said to Paul.
I think the point you raise is valid enough. I have crossed out the word “claimed” in the essay, and replaced it with “implied”.
OK, thanks!
Well, when he says something like this:
...He’s saying something notably positive about some sort of track record. That plus the comments he made about the Metaculus updates, and he clearly thinks he’s been doing well. Yes, he doesn’t have a track record on Metaculus (I’m not even aware of him having a profile). But if I just read what he writes and see what he’s implying, he thinks he’s doing much better at predicting events than somebody, and many of those somebodys seem to be people closer to Hanson’s view, and also seem to be Metaculus predictors.
Also, perhaps I’m using the word “great” more informally than you in this context.
As an example of the kind of point that one might use in deciding who “came off better” in the FOOM debate, Hanson predicted that “AIs that can parse and use CYC should be feasible well before AIs that can parse and use random human writings”, which seems pretty clearly falsified by large language models—and that also likely bears on Hanson’s view that “[t]he idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy”.
As you point out, however, this exercise of looking at what was said and retrospectively judging whose worldview seemed “less surprised” by what happened is definitely not the same thing as a forecasting track record. It’s too subjective; rationalizing why your views are “less surprised” by what happened than some other view (without either view having specifically predicted what happened), is not hugely more difficult than rationalizing your views in the first place.
There was a lot of other stuff in that debate.
I think the passage you quote there is just totally correct though. If you turn the clock back ten years or more to when all that stuff was happening, Yudkowsky was the “AGI is really important and coming sooner than you think” end of the spectrum, and the other side seemed to be “AGI is either not ever going to be a thing, or not ever going to be important” and then the median opinion was something like “Plausibly it’ll be an important thing but it’s coming 50 − 100 years from now.” At least that’s my impression from the 9-ish years I’ve been lurking on LW and the 7-ish years I’ve been talking to people in the community. (gosh I’m old.)
In the passage you quote I interpret Yud as saying that when you compare his claims about AGI back then to claims that other rationalists and EAs were making, people like Hanson, with the benefit of hindsight his look closer to the truth. I think that’s correct. Of course the jury is still out, since most of the claims on both sides were about things that haven’t happened yet (AGI is still not here) but e.g. it’s looking pretty unlikely that uploads/ems will come first, it’s looking pretty unlikely that AGI will be an accumulation of specialized modules built by different subcontractors (like an f-35 fighter jet lol), it’s looking pretty likely that it’ll happen in the 20′s or 30′s instead of the 60′s or 70′s… most of all, it’s looking pretty likely that it’ll be a Big Deal, something we all should be thinking about and preparing for now.
On overall optimism it seems clear that Eliezer won—Robin seems unusually bad, while Eliezer seems unusually good. I also think on “domain-specific engineering” vs “domain-general engineering” Eliezer looks unusually good while Robin looks typical.
But I think there are also comparably-important substantive claims that look quite bad. I don’t think Eliezer has an unambiguous upper hand in the FOOM debate a all:
The debate was about whether a small group could quickly explode to take over the world. AI development projects are now billion-dollar affairs and continuing to grow quickly, important results are increasingly driven by giant projects, and 9 people taking over the world with AI looks if anything even more improbable and crazy than it did then. Now we’re mostly talking about whether a $10 trillion company can explosively grow to $300 trillion as it develops AI, which is just not the same game in any qualitative sense. I’m not sure Eliezer has many precise predictions he’d stand behind here (setting aside the insane pre-2002 predictions), so it’s not clear we can evaluate his track record, but I think they’d look bad if he’d made them. This is really one of the foundational claims of Robin’s worldview and one of the biggest things he’s objecting to about Eliezer’s story.
I think the “secret sauce of intelligence” view is looking worse and worse, as is the “village idiot to Einstein is no gap at all view.” Again, I’m not sure whether Eliezer ever articulated this into concrete predictions but if he did I think they would look bad. It now seems very likely that we will have AI systems that can contribute meaningfully to R&D before they “wake up” in Eliezer’s sense, and that their contributions will look more like normal human contributions. (This may be followed by an Eliezer-style takeoff, but at this point that looks more like a subsequent round of the singularity, after crazy acceleration and transformation caused by more mundane AI—as is the case in Robin’s best guess story). Similarly, we are seeing a lot of AI systems at intermediate levels of capability that Eliezer appeared to consider unlikely, e.g. AI systems that can write a bit of code or perform mediocrely on programming competitions but aren’t geniuses, who can speak clearly and understand the world OK but whose impact is only modest, who take multiple years to cross the human range at many tasks and games.
Even on the timelines stuff, I don’t think you should just give Eliezer a pass for earlier even more aggressive technological predictions, or to give him too much credit for a specific prediction when he didn’t put his neck out in a way that would be wrong if AI hadn’t made a lot of progress in the last 10 years. I think this is at an extremely high risk of a very boring kind of hindsight bias. (This gets me in particular, because Eliezer’s actual track record on AI timelines seems to me to be so much worse than the historical bioanchors people he insults.)
It seems reasonable to dunk on Hanson for liking CYC, but I think most people would say that CYC is a lot closer to the right idea than Eurisko is, and that “learn how to engineer from human” is a lot closer than “derive it all from first principles.” Again, hard to evaluate any of these things, since Eliezer is not saying much that’s specific enough to be wrong, but he’s also not saying much that’s specific enough to be right. I think that large language models just qualitatively don’t look anything like the kind of AI that Eliezer describes.
Overall I really think we’re in the regime where selective interpretation and emphasis can very easily let either side think they had the upper hand here.
(Note that I also think this is true about a lot of my “prediction track record.” I think the biggest difference is that I’m just less smug and condescending about it, given how unclear the record really is, and don’t constantly dismiss people as “the kind of reasoning that doesn’t work here in the real world”—I try to mostly bring it up when someone like Eliezer is making a big point of their own track record.)
Robin on AI timelines just seems particularly crazy. We can’t yet settle the ems vs de novo AI bet, but I think the writing is on the wall, and his forecasting methodology for the 300 year timeline seems so crazy—ask people in a bunch of fields “how far have you come to hman level, is it speeding up?” and then lean entirely on that (I think many of the short-term predictions are basically falsified now, in that if you ask people the same question they will give much higher percentages and many of the tasks are solved).
ETA: Going through the oldest examples from Robin’s survey to see how the methodology fares:
Melanie Mitchell gives 5% progress in 20 years towards human-level analogical reasoning. But the kinds of string manipulation used in Mitchell’s copycat problem seems to just be ~totally solved by the current version of the OpenAI API. (I tried 10 random questions from this list, and the only one it got wrong was “a → ab, z → ?” where it said “z → z b” instead of what I presume was the intended “z → z y”. And in general it seems like we’ve come quite a long way.
Murray Shanahan gives 10% progress on “knowledge representation” in 20 years, but I don’t know what this means so I’ll skip over it.
Wendy Hall gives 1% on “computer-assisted training” in 20 years. I don’t know how to measure progress in this area, but I suspect any reasonable measure for the last 10 years will be >> 5%.
Claire Cardie and Peter Norvig give 20% progress on NLP in 20 years. I think that 2013-2023 has seen much more than another 20% progress, in that it’s now becoming difficulty to write down any task in NLP for which models have subhuman performance (and instead we just use language to express increasingly-difficult non-language tasks)
Aaron Dollar gives <1% on robotic grasping in 20 years. Hard to evaluate quantitatively, but seems very hard to argue we’ve come <10% more of the way in the last 10 years and in simulation I think we may just be roughly human level.
Timothy Meese gives 5% progress on early human vision processing in 20 years, but I think it now seems like we are quite close to (or even past) human level at this task. Though I think maybe he’s talking about how much we understand early human vision process (in which case not clear what it’s doing in this list).
At any rate, the methodology looks to me like it’s making terrible predictions all over the place. I think I’m on the record objecting that these estimates seem totally unreasonable, though the only comment I can find by me on it is 5 years ago here where I say I don’t think it’s informative and give >20% that existing ML in particular will scale to human-level AI.
Regarding the weird mediocrity of modern AI, isn’t part of this that GPT-3-style language models are almost aiming for mediocrity?
Would a hypothetical “AlphaZero of code” which built its own abstractions from the ground up—and presumably would not reinvent Python (AlphaCode is cool and all, but it does strike me as a little absurd to see an AI write Python) - have this property?
Game-playing AI is also mediocre, as are models fine-tuned to write good code. 100B parameter models trained from scratch to write code (rather than to imitate human coders) would be much better but would take quite a lot longer to train, and I don’t see any evidence that they would spend less time in the mediocre subhuman regime (though I do agree that they would more easily go well past human level).
Also this.
Maybe you mean something else there, but wasn’t Open AI like 30 people when they released GPT-2 and maybe like 60 when they released GPT-3? This doesn’t seem super off from 9 people, and my guess is there is probably a subset of 9 people that you could poach from OpenAI that could have made 80% as fast progress on that research as the full set of 30 people (at least from talking to other people at OpenAI, my sense is that contributions are very heavy-tailed)?
Like, my sense is that cutting-edge progress is currently made by a few large teams, but that cutting-edge performance can easily come from 5-10 person teams, and that if we end up trying to stop race-dynamics, that the risk from 5-10 person teams would catch up pretty quickly with the risk from big teams, if the big teams halted progress. It seems to me that if I sat down with 8 other smart people, I could probably build a cutting-edge system within 1-2 years. The training cost of modern systems are only in the 10 million range, which is well within the reach of a 10 person team.
Of course, we might see that go up, but I feel confused about why you are claiming that 10 person teams building systems that are at the cutting-edge of capabilities and therefore might pose substantial risk is crazy.
GPT-2 is very far from taking over the world (and was indeed <<10 people). GPT-3 was bigger (though still probably <10 people depending how you amortize infrastructure), and remains far from taking over the world. Modern projects are >10 people, and still not yet taking over the world. It looks like it’s already not super plausible for 10 people to catch up, and it’s rapidly getting less plausible. The prediction isn’t yet settled, but neither are the predictions in Eliezer’s favor, and it’s clear which way the wind blows.
These projects are well-capitalized, with billions of dollars in funding now and valuations rapidly rising (though maybe a dip right now with tech stocks overall down ~25%). These projects need to negotiate absolutely massive compute contracts, and lots of the profit looks likely to flow to compute companies. Most of the work is going into the engineering aspects of these projects. There are many labs with roughly-equally-good approaches, and no one has been able to pull much ahead of the basic formula—most variation is explained by how big a bet different firms are willing to make.
Eliezer is not talking about 10 people making a dominant AI because the rest of the world is being busy slowing down out of concern for AI risk, he is talking about the opposite situation of 10 people making a dominant AI which is also safer, while people are barreling ahead, which is possible because they are building AI in a better way. In addition to 10 people, the view “you can find a better way to build AI that’s way more efficient than other people” is also starting to look increasingly unlikely as performance continues to be dominated by scale and engineering rather than clever ideas.
Everything is vague enough that I might be totally misunderstanding the view, and there is a lot of slack in how you compare it to reality. But for me the most basic point is that this is not a source of words about the future that I personally should be listening to; if there is a way to turn these words into an accurate visualization of the future, I lack the machinery to do so.
(The world I imagined when I read Robin’s words also looks different from the world of today in a lot of important ways. But it’s just not such a slam dunk comparing them, on this particular axis it sure looks more like Robin’s world to me. I do wish that people had stated some predictions so we could tell precisely rather than playing this game.)
Historical track record of software projects is that it’s relatively common that a small team of ~10 people outperforms 1000+ person teams. Indeed, I feel like this is roughly what happened with Deepmind and OpenAI. I feel like in 2016 you could have said that current AGI projects already have 500+ employees and are likely to grow even bigger and so it’s unlikely that a small 10-person team could catch up, and then suddenly the most cutting-edge project was launched by a 10-person team. (Yes, that 10 person team needed a few million dollars, but a few million dollars are not that hard to come by in the tech-sector).
My current guess is that we will continue to see small 10-person teams push the cutting-edge forward in AI, just as we’ve seen the same in most other domains of software.
I do agree with this in terms of what has been happening in the last few years, though I do expect this to break down as we see more things in the “leveraging AI to improve AI development progress” and “recursive self-improvement stuff” categories, which seem to currently enter the horizon. It already seems pretty plausible to me that a team that had exclusive access to a better version of Codex has some chance of outperforming other software development teams by 3-5x, which would then feed into more progress on the performance of the relevant systems.
I do think this is substantially less sharp than what Eliezer was talking about at the time, but I personally find the “a small team of people who will use AI tools to develop AI systems, can vastly outperform large teams that are less smart about it” hypothesis pretty plausible, and probably more likely than not for what will happen eventually.
I think “team uses Codex to be 3x more productive” is more like the kind of thing Robin is talking about than the kind of thing Eliezer is talking about (e.g. see the discussion of UberTool, or just read the foom debate overall). And if you replace 3x with a more realistic number, and consider the fact that right now everyone is definitely selling that as a product rather than exclusively using it internally as a tool, then it’s even more like Robin’s story.
Everyone involved believes in the possibility of tech startups, and I’m not even sure if they have different views about the expected returns to startup founders. The 10 people who start an AI startup can make a lot of money, and will typically grow to a large scale (with significant dilution, but still quite a lot of influence for founders) before they make their most impressive AI systems.
I think this kind of discussion seems pretty unproductive, and it mostly just reinforces the OP’s point that people should actually predict something about the world if we want this kind of discussion to be remotely useful for deciding how to change beliefs as new evidence comes in (at least about what people / models / reasoning strategies work well). If you want to state any predictions about the next 5 years I’m happy to disagree with them.
The kinds of thing I expect are that (i) big models will still be where it’s at, (ii) compute budgets and team sizes continue to grow, (iii) improvements from cleverness continue to shrink, (iv) influence held by individual researchers grows in absolute terms but continues to shrink in relative terms, (v) AI tools become 2x more useful over more like a year than a week, (vi) AI contributions to AI R&D look similar to human contributions in various ways. Happy to put #s on those if you want to disagree on any. Places where I agree with the foom story are that I expect AI to be applied differentially to AI R&D, I expect the productivity of individual AI systems to scale relatively rapidly with compute and R&D investment, I expect overall progress to qualitatively be large, and so on.
Yeah, I think this is fair. I’ll see whether I can come up with some good operationalizations.
Possible counterevidence (10 months later)?—the GPT-4 contributors list lists almost 300 names.[1]
Methodology: I copied text from the contributors page (down to just before it says “We also acknowledge and thank every OpenAI team member”), used some quick Emacs keyboard macros to munge out the section headers and non-name text (like “[topic] lead”), deduplicated and counted in Python (and subtracted one for a munging error I spotted after the fact), and got 290. Also, you might not count some sections of contributors (e.g., product management, legal) as relevant to your claim.
Yep, that is definitely counterevidence! Though my model did definitely predict that we would also continue seeing huge teams make contributions, but of course each marginal major contribution is still evidence.
I have more broadly updated against this hypothesis over the past year or so, though I still think there will be lots of small groups of people quite close to the cutting edge (like less than 12 months behind).
Currently the multiple on stuff like better coding tools and setting up development to be AI-guided just barely entered the stage where it feels plausible that a well-set-up team could just completely destroy large incumbents. We’ll see how it develops in the next year or so.
If you’re not already doing machine learning research and engineering, I think it takes more than two years of study to reach the frontier? (The ordinary software engineering you use to build Less Wrong, and the futurism/alignment theory we do here, are not the same skills.)
As my point of comparison for thinking about this, I have a couple hundred commits in Rust, but I would still feel pretty silly claiming to be able to build a state-of-the-art compiler in 2 years with 7 similarly-skilled people, even taking into account that a lot of the work is already done by just using LLVM (similar to how ML projects can just use PyTorch or TensorFlow).
Is there some reason to think AGI (!) is easier than compilers? I think “newer domain, therefore less distance to the frontier” is outweighed by “newer domain, therefore less is known about how to get anything to work at all.”
Yeah, to be clear, I think I would try hard to hire some people with more of the relevant domain-knowledge (trading off against some other stuff). I do think I also somewhat object to it taking such a long time to get the relevant domain-knowledge (a good chunk of people involved in GPT-3 had less than two years of ML experience), but it doesn’t feel super cruxy for anything here, I think?
To be clear, I agree with this, but I think this mostly pushes towards making me think that small teams with high general competence will be more important than domain-knowledge. But maybe you meant something else by this.
I think the argument “newer domain hence nearer frontier” still holds. The fact that we don’t know how to make an AGI doesn’t bear on how much you need to learn to match an expert.
To be clear, this is not the scenario that I worry about, and neither is it the scenario most other people I talk about AI Alignment tend to worry about. I recognize there is disagreement within the AI Alignment community here, but this sentence sounds like it’s some kind of consensus, when I think it clearly isn’t. I don’t expect we will ever see a $300 trillion company before humanity goes extinct.
I’m just using $300 trillion as a proxy for “as big as the world.” The point is that we’re now mostly talking about Google building TAI with relatively large budgets.
It’s not yet settled (since of course none of the bets are settled). But current projects are fairly big, the current trend is to grow quite quickly, and current techniques have massive returns to scale. So the wind certainly seems to be blowing in that direction about as hard as it could.
Well, $300 trillion seems to assume that offense is about as hard as defense in this analogy. Russia launching a nuclear attack on the U.S. that somehow chains into a nuclear winter and civilizational collapse does not imply that Russia has “grown to $300 trillion”. Similarly, an AI developing a bioweapon for something like $5,000 that destroys humanity’s ability to coordinate or orient and kills 99% of the population, and then rebuilding over the course of a few years without humans around, also doesn’t look at all like “explosive growth to $300 trillion”.
This seems important, since you are saying that “[this] is just not the same game in any qualitative sense”, whereas I feel that something like the scenario above is most likely, that we haven’t seen much evidence to suggest it’s not what’s going to happen, and that it sounds quite similar to what Eliezer was talking about at the time. Like, I think de facto an AI probably won’t do an early strike like this that only kills 99% of the population, and will instead wait longer to make sure it can do something with less chance of failure, but the point of no return will have been crossed when a system first had the capability to kill approximately everyone.
I agree with this. It seems likely that model sizes will continue going up, and that cutting-edge performance will probably require at least on the order of $100M in a few years, though it’s not fully clear how much of that money will be wasted, or how much of the cutting-edge results a team could reproduce without access to the full $100M. In as much as this does come true, it makes me more optimistic that cutting-edge capabilities will have at least something like 3 years of lead before a 10-person team could reproduce them for a tenth of the cost (which is probably roughly what has happened historically, I’d guess?).
Eliezer very specifically talks about AI systems that “go foom,” after which they are so much better at R&D than the rest of the world that they can very rapidly build molecular nanotechnology, and then build more stuff than the rest of the world put together.
This isn’t about offense vs. defense; that’s just >$300 trillion of output, conventionally measured. We’re not talking about random terrorists who find a way to cause harm; we’re talking about the entire process of (what we used to call) economic growth now occurring inside a lab in fast motion.
I think he lays this all out pretty explicitly. And for what it’s worth I think that’s the correct implication of the other parts of Eliezer’s view. That is what would happen if you had a broadly human-level AI with nothing of the sort anywhere else. (Though I also agree that maybe there’d be a war or decisive first strike first, it’s a crazy world we’re talking about.)
And I think in many ways that’s quite close to what will happen. It just seems most likely to take years instead of months, to use huge amounts of compute (and therefore share proceeds with compute providers and a bunch of the rest of the economy), to result in “AI improvements” that look much more similar to conventional human R&D, and so on.
Good points; those do seem to be cases in which Hanson comes out better. As you say, it comes down to how heavily you weight the stuff Yudkowsky beat Hanson on vs. the stuff Hanson beat Yudkowsky on. I also want to reiterate that I think Yudkowsky is being obnoxious.
(I also agree that the historical bio anchors people did remarkably well & much better than Yudkowsky.)
Note that I feel like, if we look at the overall disagreements in 2008, Eliezer’s view overall seems better than Robin’s. So I think we’re probably on the same page here.
Regarding the Einstein stuff: do you think Einstein’s brain had significantly more compute than a 100-IQ person’s? I would be very surprised if Einstein had more than, I don’t know, twice the computing power of a normal brain. So the claim that the distance between an idiot and Einstein is far smaller than the distance between the idiot and a chimpanzee still seems true to me.
I’d guess that Einstein’s brain probably used something like 10% more compute than the median person.
But to the extent that there is any prediction it’s about how good software will be, and how long it will spend in the range between mediocre and excellent human performance, rather than about how big Einstein’s brain is. And that prediction seems to be faring poorly across domains.
This is a great list, and I thank you for describing it. It gives good examples of one of the claims I’m making: there’s nothing about their debate that tells us much that’s meaningful about Eliezer’s forecasting track record. In fact, I would like to link to this comment in the original post, because it seems like important supplemental material for people who are convinced that the debate was one-sided.
I agree about ems being nowhere in sight, versus steady progress in other methods. I also disagree with Hanson about timeframe (I don’t see it taking 300 years). I also agree that general algorithms will be very important, probably more important than Hanson said. I also put a lower probability on a prolonged AI winter than Hanson.
But as you said, AGI still isn’t here. I’d take it a step further—did the Hanson debates even have unambiguous, coherent ideas of what “AGI” refers to?
How much progress toward AGI has happened since the Hanson debate? That question is thoroughly nebulous, and answering it gives very little information about a forecasting track record, even though I disagree with Hanson. Given how Eliezer is positioned in this debate, he can just point to any impressive developments and say they go in his favor. We have practically no way of objectively evaluating that. If someone already agrees that “the event happened”, they update toward Eliezer having gotten it right. If they disagree, or if they aren’t sure what the criteria were, they don’t.
Being able to say post hoc that Eliezer “looks closer to the truth” is very different from how we measure forecasting performance, and for good reason. If I were judging this, the “prediction” absolutely resolves as “ambiguous”, despite my disagreeing with Hanson on more points in their debate.