In early 2022 Paul Christiano and Eliezer Yudkowsky publicly made a bet: Paul put the probability of an AI winning an IMO gold medal by 2025 at 8%, and Eliezer at >16%. Paul said “If Eliezer wins, he gets 1 bit of epistemic credit.” I’m not sure how to do the exact calculation, but at the current market price of 69%, Eliezer’s expected winnings are over half a bit of epistemic credit.
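One way to make the calculation concrete (this is an assumption about how to score “epistemic credit”, not something spelled out in the bet): score each side by log2 of the probability they assigned to whichever outcome occurs, so Eliezer gains log2(0.16/0.08) = 1 bit if the AI gets gold and loses log2(0.92/0.84) ≈ 0.13 bits if it doesn’t. Treating the 69% market price as the chance the bet resolves in Eliezer’s favor, his expected credit is roughly

\[
0.69 \cdot \log_2\frac{0.16}{0.08} + 0.31 \cdot \log_2\frac{1-0.16}{1-0.08} \approx 0.69 \cdot 1 - 0.31 \cdot 0.13 \approx 0.65 \text{ bits.}
\]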
If the news holds up, we should update partway toward the takeaways Paul virtuously laid out in the post, to the extent we haven’t already.
How I’d update
The informative:
I think the IMO challenge would be significant direct evidence that powerful AI would be sooner, or at least would be technologically possible sooner. I think this would be fairly significant evidence, perhaps pushing my 2040 TAI probability up from 25% to 40% or something like that.
I think this would be significant evidence that takeoff will be limited by sociological facts and engineering effort rather than a slow march of smooth ML scaling. Maybe I’d move from a 30% chance of hard takeoff to a 50% chance of hard takeoff.
If Eliezer wins, he gets 1 bit of epistemic credit. These kinds of updates are slow going, and it would be better if we had a bigger portfolio of bets, but I’ll take what we can get.
This would be some update for Eliezer’s view that “the future is hard to predict.” I think we have clear enough pictures of the future that we have the right to be surprised by an IMO challenge win; if I’m wrong about that then it’s general evidence my error bars are too narrow.
[Edit: this comment is probably retracted, although I’m still confused; see discussion below.]
I’d like clarification from Paul and Eliezer on how the bet would resolve, if it were about whether an AI could get IMO silver by 2024.
Besides not fitting in the time constraints (which I think is kind of a cop-out because the process seems pretty parallelizable), I think the main reason that such a bet would resolve no is that problems 1, 2, and 6 had the form “find the right answer and prove it right”, whereas the DeepMind AI was given the right answer and merely had to prove it right. Often, finding the right answer is a decent part of the challenge of solving an Olympiad problem. Quoting more extensively from Manifold commenter Balasar:
The “translations” to Lean do some pretty substantial work on behalf of the model. For example, in the theorem for problem 6, the Lean translation that the model is asked to prove includes an answer that was not given in the original IMO problem.
theorem imo_2024_p6
    (IsAquaesulian : (ℚ → ℚ) → Prop)
    (IsAquaesulian_def : ∀ f, IsAquaesulian f ↔
      ∀ x y, f (x + f y) = f x + y ∨ f (f x + y) = x + f y) :
    IsLeast {(c : ℤ) | ∀ f, IsAquaesulian f →
      {(f r + f (-r)) | (r : ℚ)}.Finite ∧ {(f r + f (-r)) | (r : ℚ)}.ncard ≤ c} 2
The model is supposed to prove that “there exists an integer c such that for any aquaesulian function f there are at most c different rational numbers of the form f(r)+f(−r) for some rational number r, and find the smallest possible value of c".
The original IMO problem does not include that the smallest possible value of c is 2, but the theorem that AlphaProof was given to solve has the number 2 right there in the theorem statement. Part of the problem is to figure out what 2 is.
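For contrast, a formalization that did not bake in the answer might existentially quantify over the constant, something like the sketch below (purely illustrative, not anything AlphaProof was actually given; it assumes the same Mathlib environment as the statement above):

theorem imo_2024_p6_answer_unknown
    (IsAquaesulian : (ℚ → ℚ) → Prop)
    (IsAquaesulian_def : ∀ f, IsAquaesulian f ↔
      ∀ x y, f (x + f y) = f x + y ∨ f (f x + y) = x + f y) :
    ∃ answer : ℤ, IsLeast {(c : ℤ) | ∀ f, IsAquaesulian f →
      {(f r + f (-r)) | (r : ℚ)}.Finite ∧ {(f r + f (-r)) | (r : ℚ)}.ncard ≤ c} answer := by
  sorry -- proof omitted; only the shape of the statement matters here

Proving a statement of that shape still forces the prover to exhibit the right constant as a witness, which is closer to the original “determine the smallest value of c” phrasing.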
I’m pretty sure what’s going on is:
The system automatically generates candidate theorems it might try to prove, expressing possible answers, and attempts to prove them.
In this case, the version of the theorem it ended up being able to prove was the one with 2 in that position. (Which is just as well, since—I assume, not having actually tried to solve the problem for myself—that is in fact the unique number for which such a theorem is true.)
So the thing you end up getting a proof of includes the answer, but not because the system was told the answer in advance.
It would be nice to have this more explicitly from the AlphaProof people, though.
[EDITED to add:] Actually, as per the tweet from W T Gowers quoted by “O O” elsewhere in this thread, we do have it explicitly, not from the AlphaProof people but from one of the mathematicians the AlphaProof people engaged to evaluate their solutions:
https://x.com/wtgowers/status/1816839783034843630
It wasn’t told what to prove. To get round that difficulty, it generated several hundred guesses (many of which were equivalent to each other). Then it ruled out lots of them by finding simple counterexamples, before ending up with a small shortlist that it then worked on.
That comment doesn’t seem to be correct.
But, in this tweet https://x.com/wtgowers/status/1816509813440082014 he says:
Another qualification is that the problems were manually translated into the proof assistant Lean, and only then did the program get to work. But the essential mathematics was done by the program: just the autoformalization part was done by humans.
So maybe e.g. the (not very auto-) autoformalization part produced a theorem-statement template with some sort of placeholder where the relevant constant value goes, and AlphaProof knew it needed to find a suitable value to put in the gap.
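Concretely, such a template might look like the problem statement parameterized by the unknown constant, along the lines of the sketch below (a hypothetical illustration of the idea, not AlphaProof’s actual internal representation):

def p6Candidate (IsAquaesulian : (ℚ → ℚ) → Prop) (c : ℤ) : Prop :=
  IsLeast {(c' : ℤ) | ∀ f, IsAquaesulian f →
    {(f r + f (-r)) | (r : ℚ)}.Finite ∧ {(f r + f (-r)) | (r : ℚ)}.ncard ≤ c'} c

-- Per the Gowers tweet above, the system generated several hundred guesses, ruled most of them
-- out with simple counterexamples, and the statement it ended up proving is the c = 2 case,
-- i.e. the imo_2024_p6 theorem quoted earlier.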
This part seems to be there just to keep an LLM translation from getting the problem slightly wrong and messing up the score as a result.
It would be a shame for your once-a-year attempt to have even a 2% chance of being messed up by an LLM hallucination.
I think it would still be correct for it to resolve no for time reasons, even if it feels like it could be parallelisable (if it were parallelised, I would feel differently).
In early 2023 I bet $500 on an AI winning an IMO gold medal by 2026. This was a 1:1 bet against Michael Vassar, meaning I put >50% on it. It now seems very likely that I’m going to win.
To me, this was to be expected as a straightforward application of AlphaZero-like self-play amplification and distillation. The missing piece was the analogue of the policy network, which for the AlphaZero board games was a convolutional neural network. Once it became clear that existing LLMs were smart enough to generate good heuristics for this (given enough data), it seemed quite obvious to me that self-play guided by an LLM policy heuristic would work.
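For concreteness, that amplification-and-distillation loop (expert iteration, with the LLM playing the role AlphaZero’s policy network played) can be sketched as below. Every name and interface here is a hypothetical placeholder for illustration, not DeepMind’s actual training code.

from typing import Callable, List, Tuple

def expert_iteration(
    sample_proofs: Callable[[str, int], List[str]],    # LLM policy: propose candidate proofs for a formal statement
    verify: Callable[[str, str], bool],                 # proof checker (e.g. a Lean kernel): accept or reject a proof
    finetune: Callable[[List[Tuple[str, str]]], None],  # distillation: train the policy on verified (statement, proof) pairs
    problems: List[str],
    rounds: int = 10,
    samples_per_problem: int = 64,
) -> None:
    # AlphaZero-style amplification and distillation with an LLM as the policy heuristic.
    for _ in range(rounds):
        solved: List[Tuple[str, str]] = []
        for statement in problems:
            # Amplification: search guided by the current policy, filtered by the verifier.
            candidates = sample_proofs(statement, samples_per_problem)
            proofs = [p for p in candidates if verify(statement, p)]
            if proofs:
                solved.append((statement, min(proofs, key=len)))  # keep the shortest verified proof
        # Distillation: the verified proofs become training data, so the next
        # round's proposals come from a stronger heuristic.
        finetune(solved)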