It wasn’t told what to prove. To get round that difficulty, it generated several hundred guesses (many of which were equivalent to each other). Then it ruled out lots of them by finding simple counterexamples, before ending up with a small shortlist that it then worked on.
Another qualification is that the problems were manually translated into the proof assistant Lean, and only then did the program get to work. But the essential mathematics was done by the program: just the autoformalization part was done by humans.
https://x.com/wtgowers/status/1816839783034843630
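To get a feel for what that "guess, then weed out by counterexample" step could look like, here is a toy Lean sketch. It is entirely my own illustration on a made-up problem (determine the constant c with 2 * n ≥ 6 iff n ≥ c), not anything resembling AlphaProof's actual machinery:

```lean
-- Toy illustration only, nothing like the real system.
-- "Problem": determine the constant c such that 2 * n ≥ 6 ↔ n ≥ c for all n.
-- Step 1: guess a pile of candidate constants.
def candidates : List Nat := List.range 10

-- Step 2: a single n refutes a candidate c if the two sides disagree at n.
def refutedBy (c n : Nat) : Bool :=
  decide (2 * n ≥ 6) != decide (n ≥ c)

-- Step 3: keep only the candidates that survive a small brute-force search.
def survives (c : Nat) : Bool :=
  (List.range 100).all fun n => !(refutedBy c n)

#eval candidates.filter survives   -- [3]: only one guess makes the shortlist
```

In the real setting the surviving shortlist would then be handed to the prover, which is where the actual work happens.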
That comment doesn’t seem to be correct.
But, in this tweet https://x.com/wtgowers/status/1816509813440082014 he says:
So maybe e.g. the (not very auto-) autoformalization part produced a theorem-statement template with some sort of placeholder where the relevant constant value goes, and AlphaProof knew it needed to find a suitable value to put in the gap.
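If that is right, the formalized statement could have been a template with a hole where the constant goes. Below is a toy Lean sketch of what such a template might look like, reusing the made-up inequality from the sketch above; this is purely my guess at the general shape, not the actual IMO formalization:

```lean
-- Toy sketch only, not the real IMO statement.
-- The answer is factored out as a named constant.  In the template form the
-- right-hand side would be a hole (e.g. `sorry`); here it is filled in with
-- the value the prover is supposed to find.
def answer : Nat := 3

-- The theorem refers to the constant only through its name, so choosing the
-- value and proving the characterisation are two parts of one task.
theorem answer_is_correct : ∀ n : Nat, 2 * n ≥ 6 ↔ n ≥ answer := by
  intro n
  show 2 * n ≥ 6 ↔ n ≥ 3   -- the placeholder unfolds to the chosen value
  omega                     -- linear arithmetic closes the toy goal
```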
This part seems to be there just to stop an LLM translation from getting the problem slightly wrong and messing up the score as a result.
It would be a shame for your once-a-year attempt to have even a 2% chance of being messed up by an LLM hallucination.