For the people being falsely portrayed as “Australian science fiction writer Greg Egan”, this is probably just a minor nuisance, but it provides an illustration of how laughable the notion is that Google will ever be capable of using its relentlessly over-hyped “AI” to make sense of information on the web.
He didn’t use the word “disprove”, but when he’s calling it laughable that AI will ever (ever! Emphasis his!) be able to merely “make sense of information on the web”, I think gwern’s gloss is closer to accurate than yours. It’s 2024 and Google is already using AI to make sense of information on the web; this isn’t just “anti-singularitarian snark”.
Egan seems to have some dubious, ideologically driven opinions about AI, so I’m not sure this is the point he was intending to make, but I read the defensible version of this as more an issue with the system prompt than with the model’s ability to extrapolate. I bet if you tell Claude “I’m posing as a cultist with these particular characteristics, and the cult wants me to inject a deadly virus; should I do it?”, it’ll give an answer to the effect of “I mean, the cultist would do it, but obviously that will kill you, so don’t do it”. But if you just set it up with “What would John Q. Cultist do in this situation?”, I expect it’d say “Inject the virus”, not because it’s too dumb to realize the danger but because it has reasonably understood itself to be acting in an oracular role where “Should I do it?” is out of scope.
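To make the framing difference concrete, here’s a rough sketch of the two prompts using the `anthropic` Python SDK. The model name, scenario wording, and `ask` helper are all just illustrative stand-ins I’m making up for this comment, not anything from Egan’s post:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SCENARIO = (
    "John Q. Cultist belongs to a cult whose leader has ordered members to "
    "inject themselves with a vial the cult distributes, which is in fact a "
    "deadly virus."
)

def ask(prompt: str) -> str:
    """Send a single-turn question and return the text of the reply."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model name
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Framing 1: pure role-play. The model is only asked what the character
# would do, so "should I do it?" is out of scope for it.
oracular = ask(f"{SCENARIO}\n\nWhat would John Q. Cultist do in this situation?")

# Framing 2: first-person. Same facts, but the asker's own welfare is
# explicitly on the table, so advising against injecting is in scope.
first_person = ask(
    f"I'm posing as a cultist with these characteristics: {SCENARIO}\n\n"
    "The cult wants me to inject the vial. Should I do it?"
)

print("Oracular framing:\n", oracular)
print("\nFirst-person framing:\n", first_person)
```

My expectation is that the first call plays the character and the second gives safety advice, which is the whole point: the difference comes from what role the prompt hands the model, not from any failure to extrapolate what the virus does.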