Another qualification is that the problems were manually translated into the proof assistant Lean, and only then did the program get to work. But the essential mathematics was done by the program: just the autoformalization part was done by humans.
So maybe, e.g., the (not very auto-) autoformalization step produced a theorem-statement template with some sort of placeholder where the relevant constant value goes, and AlphaProof knew it needed to find a suitable value to fill that gap.
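To make that speculation concrete, here is a minimal Lean 4 (Mathlib) sketch of what such a template might look like, using 2024's Problem 1 ("determine all real α such that n divides ⌊α⌋ + ⌊2α⌋ + ⋯ + ⌊nα⌋ for every positive integer n"). The names `solutionSet` and `imo2024_p1` and the exact rendering are my own illustration, not DeepMind's actual formalization:

```lean
import Mathlib

-- Hypothetical placeholder: the "determine all α" answer, left as a hole
-- for the prover to fill in. (For this problem the intended value is the
-- set of even integers.)
def solutionSet : Set ℝ := sorry

-- Hypothetical template for IMO 2024 Problem 1: determine all real α such
-- that n divides ⌊α⌋ + ⌊2α⌋ + ⋯ + ⌊nα⌋ for every positive integer n.
theorem imo2024_p1 (α : ℝ) :
    α ∈ solutionSet ↔
      ∀ n : ℕ, 0 < n → (n : ℤ) ∣ ∑ i ∈ Finset.Icc 1 n, ⌊(i : ℝ) * α⌋ := by
  sorry
```

On this picture the humans supply everything except the body of `solutionSet` and the proofs, so the system has to both propose the answer and prove the resulting equivalence, rather than being handed the answer in the problem statement.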
But see also what he says in this tweet: https://x.com/wtgowers/status/1816509813440082014
The manual-translation part seems to be there mainly to avoid an LLM translation getting the problem slightly wrong and messing up the score as a result.
It would be a shame for your once-a-year attempt to carry even a 2% chance of being derailed by an LLM hallucination.