A Cautionary Tale about Trusting Numbers from Wikipedia
So this morning I woke up early and thought to myself: “You know what I haven’t done in a while? A good old-fashioned Wikipedia rabbit hole.” So I started reading the article on rabbits. Things were relatively sane until I got to the section on rabbits as food.
Wild leporids comprise a small portion of global rabbit-meat consumption. Domesticated descendants of the European rabbit (Oryctolagus cuniculus) that are bred and kept as livestock (a practice called cuniculture) account for the estimated 200 million tons of rabbit meat produced annually.[161] Approximately 1.2 billion rabbits are slaughtered each year for meat worldwide.[162]
Something has gone very wrong here!
200 million tons is 400 billion pounds (add ~10% if they’re metric tons, but we can ignore that.)
Divide that by 1.2 billion, and we can deduce that those rabbits weigh in at over 300 pounds each on average! Now I know we’ve bred some large animals for livestock, but I’m rolling to disbelieve when it comes to three hundred pound bunnies.
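The sanity check above is two lines of arithmetic; here it is as a quick sketch, using the post’s numbers (200 million tons, 1.2 billion rabbits) and short tons for simplicity:

```python
# Sanity-check the Wikipedia figures: total meat mass / rabbits slaughtered.
total_meat_lbs = 200e6 * 2000   # 200 million (short) tons in pounds
rabbits_per_year = 1.2e9        # rabbits slaughtered annually, per [162]

avg_weight_lbs = total_meat_lbs / rabbits_per_year
print(f"implied average rabbit: {avg_weight_lbs:.0f} lbs")  # ~333 lbs
```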
Of the two sources Wikipedia cites, [161] looks like the less reliable one. It’s a WSJ blog. But the biggest reason we shouldn’t trust that article is that its numbers aren’t even internally consistent!
From the article:
Globally, about some 200 million tons of rabbit meat are produced a year, says Luo Dong, director of the Chinese Rabbit Industry Association. China consumes about 30% of the whole production, he said, with 70% of such meat—or some 420,000 tons a year—going to Sichuan province as well as the neighboring municipality of Chongqing.
There’s a basic arithmetic error here! `200 million * 70% * 30%` is 42 *million*, not 420,000.
If we assume this 200 million ton number was wrong and the 420,000 ton number for Sichuan was right, the global number should in fact be 2 million tons. This would make the rabbits weigh three pounds each on average, which is a much more reasonable weight for a rabbit!
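Both checks script trivially; this is just the arithmetic from the two paragraphs above, nothing more:

```python
# The WSJ article's own numbers: 200M tons global, 30% consumed in China,
# 70% of that going to Sichuan/Chongqing.
sichuan_implied = 200e6 * 0.30 * 0.70
print(sichuan_implied)  # ~42,000,000 tons, not the article's 420,000

# Back-solve: if the 420,000-ton Sichuan figure is right, what must global be?
global_implied = 420_000 / (0.30 * 0.70)
print(global_implied)   # ~2,000,000 tons

# And the resulting per-rabbit weight at 1.2 billion rabbits per year:
print(global_implied * 2000 / 1.2e9)  # ~3.3 lbs each -- a plausible rabbit
```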
If I had to guess how this mistake happened, putting on my linguist hat: Chinese has a single word for ten thousand, like the Greek-derived “myriad” (spelt either 万 or 萬). If you wanted to say 2*10^6 in Chinese, it would come out as something like “two hundred myriad”. So I can see a fairly plausible way a translator could slip and render it as “200 million”.
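To make the hypothesized slip concrete (a toy illustration of the guess above, not anything from the article): Chinese groups large numbers by 10^4 rather than 10^3, so “two hundred myriad” and “two hundred million” differ by exactly a factor of 100:

```python
# Chinese groups numbers by 10^4 ("myriad", 万); English groups by 10^3.
MYRIAD = 10_000

two_hundred_myriad = 200 * MYRIAD        # what the source likely said: 2,000,000
two_hundred_million = 200 * 1_000_000    # what the translation rendered

print(two_hundred_million // two_hundred_myriad)  # off by a factor of 100
```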
Anyway, I’ve posted this essay to the talk page and submitted an edit request. We’ll see how long it takes Wikipedia to fix this.
That seems like a good example of a clear math error.
I’m kind of surprised that LLMs aren’t catching things like that yet. I’m curious how far along such efforts are—it seems like an obvious thing to target.
They are
By “aren’t catching” do you mean “can’t” or do you mean “wikipedia company/editors haven’t deployed an LLM to crawl wikipedia, read sources and edit the article for errors”?
[161] is paywalled so I can’t really test. My guess is Claude wouldn’t find the math error off a “proofread this, here’s its sources copy/pasted” type prompt but you can try.
My guess is Claude wouldn’t find the math error off a “proofread this, here’s its sources copy/pasted” type prompt but you can try.
I was curious about this so decided to check.
Both Claude 3.7 and GPT-4o were able to spot this error when I provided them just the Wikipedia page and instructed them to find any mistakes. They also spotted the arithmetic error when asked to proof-read the cited WSJ article. In all cases, their stated reasoning was that 200 million tons of rabbit meat was way too high, on the order of global meat production, so they didn’t have to actually do any explicit arithmetic.[1]
Funnily enough, the LLMs found two other mistakes in the Rabbit Wikipedia page: the character Peter Warne was listed as Peter Wayne and doxycycline was misspelt as docycycline. So it does seem like, even without access to sources, current LLMs could do a good job at spotting typos and egregious errors in Wikipedia pages.
(caveat: both models also listed a bunch of other “mistakes” which I didn’t check carefully but seemed like LLM hallucinations since the correction contradicted reputable sources)
GPT-4o stumbles slightly when trying to do the arithmetic on the WSJ article. It compares the article’s 420,000 tons with 60 million (200 million x 0.3) rather than the correct calculation of 42 million (200 million x 0.3 x 0.7). However, I gave the same prompt to o1 and it did the maths correctly.
Neat. You could ask it for a confidence level, and that would probably correlate with the hallucinations. Another idea is to run it against the top 1000 articles and see how accurate they are. I can’t really do a back-of-envelope guess for whether it’s cost-effective to run this over all of Wikipedia per-article.
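For the back-of-envelope cost question, every number below is my own rough assumption (article count, tokens per article, blended per-token pricing), not anything from the thread, but the shape of the estimate is simple:

```python
# Rough cost model for LLM-proofreading all of Wikipedia (assumed figures).
articles = 7_000_000           # order of English Wikipedia's article count
tokens_per_article = 5_000     # article text + cited sources + prompt overhead
usd_per_million_tokens = 3.0   # assumed blended input/output API price

total_usd = articles * tokens_per_article / 1_000_000 * usd_per_million_tokens
print(f"one full pass: ${total_usd:,.0f}")
```

Under these assumptions a single pass lands around the low six figures, which suggests the top-1000-articles experiment would cost on the order of tens of dollars.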
Also I kind of just want this on reddit and stuff. I’m more concerned about casually ingested fake news than errors in high quality articles when it comes to propaganda/disinfo.
By “aren’t catching” do you mean “can’t” or do you mean “wikipedia company/editors haven’t deployed an LLM to crawl wikipedia, read sources and edit the article for errors”?
Yep.
My guess is that this would take some substantial prompt engineering, and potentially a fair bit of money.
I imagine they’ll get to it eventually (as it becomes easier + cheaper), but it might be a while.
Links:
Original article: https://en.wikipedia.org/wiki/Rabbit#As_food_and_clothing
[161] https://web.archive.org/web/20170714001053/https://blogs.wsj.com/chinarealtime/2014/06/13/french-rabbit-heads-the-newest-delicacy-in-chinese-cuisine/