> My guess is Claude wouldn’t find the math error off a “proofread this, here’s its sources copy/pasted” type prompt but you can try.
I was curious about this, so I decided to check.
Both Claude 3.7 and GPT-4o were able to spot this error when I provided them just the Wikipedia page and instructed them to find any mistakes. They also spotted the arithmetic error when asked to proofread the cited WSJ article. In all cases, their stated reasoning was that 200 million tons of rabbit meat was implausibly high, being on the order of total global meat production (roughly 350 million tons a year), so they didn’t have to do any explicit arithmetic.[1]
Funnily enough, the LLMs found two other mistakes in the Rabbit Wikipedia page: the character Peter Warne was listed as Peter Wayne and doxycycline was misspelt as docycycline. So it does seem like, even without access to sources, current LLMs could do a good job at spotting typos and egregious errors in Wikipedia pages.
(Caveat: both models also listed a bunch of other “mistakes” which I didn’t check carefully, but which seemed like hallucinations, since the suggested corrections contradicted reputable sources.)
GPT-4o stumbled slightly when trying to do the arithmetic on the WSJ article: it compared the article’s 420,000 tons with 60 million tons (200 million × 0.3) rather than the correct 42 million tons (200 million × 0.3 × 0.7). However, when I gave the same prompt to o1, it did the maths correctly.
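To spell the arithmetic out (using the 0.3 and 0.7 factors exactly as given above; this just restates the calculation, not the article’s underlying figures):

$$200{,}000{,}000 \times 0.3 = 60{,}000{,}000 \text{ tons (GPT-4o’s figure, dropping the 0.7 factor)}$$

$$200{,}000{,}000 \times 0.3 \times 0.7 = 42{,}000{,}000 \text{ tons (the correct figure)}$$

Either way, both results dwarf the article’s 420,000 tons, so the overall sanity check holds despite the slip.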