Gell-Mann checks
tl;dr: “Gell-Mann amnesia” is a cognitive bias—an observation of human failure. The mechanism behind it can be instrumentalized. Call it “Gell-Mann checks”.
Importantly, Gell-Mann checks are only valid insofar as the truth-finding mechanism being judged is good at generalizing.
That is, you can only judge a whole mind by its part if it treats every part the same way.
I used to nod along with most of what my philosophy teacher said; it all seemed coherent. Then he talked about nuclear power, and he just didn’t get it. To avoid Gell-Mann amnesia, I updated to “everything my philosophy teacher says and has said might be bullshit.” [1]
Is this fair? My teacher is supposed to be a specialist, after all. I don’t have high priors on a given philosophy teacher grokking nuclear power from first principles. Fine.
But you know who do claim to be generalists? Newspapers.
When The New York Times covers nuclear power, they claim to approach the subject with as much rigor as they do politics. So if they clearly don’t get nuclear power, that’s evidence against them getting politics.
This was Crichton’s initial observation about amnesia: it wasn’t about individuals, who often hide behind the guise of specialization, but newspapers, who are supposed to be generalists through and through. [2]
But surely, my philosophy teacher isn’t an entirely different person when it comes to different subjects![3]
There’s got to be some coherency.
Some interconnected web I can draw evidence of trustworthiness from.
Nuclear power and philosophy are two windows onto the same mind: why wouldn’t evidence correlate?
Well.
Generalizing is hard
If you don’t have even a tentative grasp of the subject at hand, you’re kind of doomed from the beginning.
The NYT could always decide they need a new AI branch, but what then? As a journalist, you’ve spent years perfecting the art of meta-level writing, hopping from subject to subject and relying on “experts” (those with citations and decade-long degrees) for factual accuracy, with no incentive to go deep on a subject yourself and build a gears-level model of the stuff.
Now they put you in charge of the AI branch. You’re tasked with understanding the AI world in a sufficiently detailed and accurate way that literally millions of readers won’t be deceived.
Who do you turn to? Yann LeCun and Geoffrey Hinton both have a lot of citations; how are you supposed to differentiate?
I suspect this is made worse by the fact that as a journalist, you tend to reach for the political aspect of things first. This seems to be what journalism does: suck in all subjects, no matter how disparate, and shove them into the political realm. And in politics, almost everything operates at highly abstract simulacrum levels.
Which works fine in politics! These days, political success seems to be only loosely tied to object-level issues. [4] That’s not the case in AI. So you’re diving into a completely unknown world with few trusted sources, and on top of that you have to retrain the way your brain thinks (to operate on lower simulacrum levels).
All subjects are not equal
The NYT or Time clearly not getting AI is evidence against their trustworthiness.
But because we can expect this field to be difficult to dip one’s toes into, it isn’t as strong evidence against them as if they clearly didn’t get economics, say.
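The point about evidence strength can be made precise with Bayes’ rule: how hard the subject is determines how likely even a trustworthy outlet is to flub it, which in turn determines how much a flub should move you. A minimal sketch, with entirely made-up numbers for illustration:

```python
def posterior_trust(prior: float,
                    p_error_if_trustworthy: float,
                    p_error_if_untrustworthy: float) -> float:
    """P(trustworthy | observed a clear error) via Bayes' rule."""
    num = p_error_if_trustworthy * prior
    denom = num + p_error_if_untrustworthy * (1 - prior)
    return num / denom

prior = 0.7  # initial trust in the outlet (made up)

# AI is hard to dip one's toes into, so even a generally trustworthy
# outlet errs there fairly often:
ai = posterior_trust(prior, p_error_if_trustworthy=0.4,
                     p_error_if_untrustworthy=0.8)

# Economics is better-trodden ground, so a clear error there is
# more damning:
econ = posterior_trust(prior, p_error_if_trustworthy=0.1,
                       p_error_if_untrustworthy=0.8)

print(round(ai, 2), round(econ, 2))  # ≈ 0.54 vs 0.23
```

Same prior, same observation (“they clearly didn’t get it”), but the update is much harsher in the easy domain, which is the claim above in miniature.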
Most people aren’t actually generalists
Nor strive to be! My philosophy teacher seems fine with the idea that he only understands a fraction of the world.
He’s not the type to try understanding nuclear power from first principles; he was content using it to argue his point, and ditched it later on. Its philosophical side, that’s all he needs to know about the technology!
It’s like he isn’t even trying to be a generalist. No remorse felt for all the fields he can’t learn about.
Meanwhile some of us try to become generalists and fail. Careers by definition restrict the domains we’re comfortable operating in. And so society doesn’t make it easy to build the kind of generalized truth-finding mechanism that would make Gell-Mann checks logically infallible.
(The Sequences are an attempt to fight that trend.)
Gell-Mann checks are still useful
Regardless of how limited anyone’s understanding of Bayes is, everyone has a confusiometer. If they pick up a new subject and don’t notice their own confusion before delivering their perspective to a class of gullible young minds, that’s evidence they’re not generally great at epistemics.
So if one day they’re knee-deep in Hegel and don’t understand a word, they might be more liable than most to push past their confusion and deliver their lessons with their usual confidence. Noticing my teacher’s inadequacy in physics should update me at least a little in favor of the “yah that’s just BS” hypothesis.
Gell-Mann checks work. But they have limits.
My (empty) ~blog: croissantology.com
- ^
This probably generalizes to all philosophy teachers. Cough.
- ^
The wiki is here.
I’m not ideal for this, but if nobody else does it, I’ll create a Wikipedia page for “Gell-Mann amnesia,” because for some reason that doesn’t exist yet (Use the Low-hanging Fruit, Luke!).
- ^
Though with Elon Musk and engineering vs. politics that seems to be the case.
- ^
They still haven’t repealed The Dread Dredging Act!
If I am reading this entry correctly, I may beg to differ with it. It would be a sign of higher intelligence if someone IS able to be a generalist and collate and interpolate data and meaning across a wide variety of topics, but who also is able to wave the white flag of humility and admit when they are in over their head. If by contrast they are unable to do so, and just tend to forge ahead on the SS Dunning-Kruger to the Land of the Grand Fallacies of Ignorance, then yes I think I am justified to look down on their overall intellectual abilities. That they may be competent in their one narrow area of expertise may simply indicate that they have successfully absorbed the surface tenets therein (as put forth by people wiser and more knowledgeable) but without necessarily pondering them more deeply. That by itself doesn’t impress me (much).
[Newspapers are a somewhat inapt entity to focus such analysis on, since we are by definition dealing with many individual minds and not just one, but even there implicit editorial and institutional biases may be at work. I’d rather focus on individuals, though.]
My fave example from my own sphere is the baseball analyst Bill James. I occasionally dropped into his own personal web site, and had always looked up to him as an impartial analyst who would bend over backward to be objective.
Imagine my total shock to see him casually dismiss some political positions of people he came across on another site as “social justice warriors,” a right-wing dog-whistle term, of course, which indicated that he at least has a lot of sympathy with MAGA views. I was instantly confused: I’d figured a mind such as his would approach politics with the same objectivity and rigor that he did baseball, and would grasp the subtleties of how our society and political systems operate well enough to be the LAST pundit to use such loaded and biased language. I was wrong, obviously.
I had, however, by that point already noticed some blind spots in his baseball analysis, and in fact had picked up on a subtle but telling little throwaway note in his The New Bill James Historical Baseball Abstract [from 2001], about my favorite pitcher of all time, in point of fact, Greg Maddux. He had earlier outlined situations where a player misses playing time but still deserves credit, such as wartime seasons. I noted, however, that he DIDN’T include time missed due to player strikes, a rather glaring omission.
Maddux’s two best seasons of course were the strike-shortened years 1994 and 1995; give him credit for the missed starts there and his peak in those seasons becomes one of the best ever.
He dismissed those concerns with a casual throwaway line, calling it “crying over spilled milk.” I was rather puzzled by such a lapse in his objectivity for years afterward, until I saw the SJW note, and realized that, being an extreme right-winger, he must have been HIGHLY biased against labor rights and thus the rights of workers to strike for better conditions, and allowed this bias to affect his baseball analysis. When I put 2 and 2 together I was yes rather appalled. His books lie in a box somewhere (after a couple of moves in the last 6 years) and I haven’t felt much of an urge to consult them since then, since noting these blind spots made me wonder where else his objectivity has lapsed. [In any event his work, while certainly groundbreaking, has been subsequently surpassed by others in the field.]
IOW, if someone has a lapse or blind spot outside their field of expertise, it will likely feed back into their core profession or area of putative expertise as well, and you will likely spot such lapses if you dig deep enough. In any event, I think I am more than justified in judging anything they say (in or out) at least a bit more harshly, because such a lapse isn’t likely to be a one-time thing but indicative of deeper deficiencies that will cut across a wide swath of the individual’s knowledge.
I was unimpressed by his statistics regardless, and his response to criticisms (where he won’t even say what those were) left me even less impressed, so I don’t think one needs to look at his political terminology to criticize Bill James.
I believe that this is probably true: they are about equally accurate in both cases, unfortunately. I’ve done enough interviews with reporters on technical subjects to know that what’s said and what’s heard are mostly unrelated, even when I come up with and they pick good quotes to use.
Well then, I can update a little more in the direction not to trust this stuff.
To be fair, there are exceptions, and some reporters or publications consistently do better at particular kinds of reporting. It just takes a lot of work to reliably figure out which are which.
Can I piggy-back off your conclusions so far? Any news you find okay?
The first few that came to mind have, it turns out, already retired since I last talked to them.
The next few are basically all bloggers with a tighter focus that I learned about either here on LW or through recommendations that ultimately chain back to SSC/ASX.
There are a lot of good sources of data in the world, and very few good sources of analysis, and those that exist have very little relationship to popularity or price or prestige.
Beyond that, it really is “buyer beware”: learning to know your own limits and improve your own speed at sorting the nonsense and spotting the bad assumptions and wrong inferences. That’s why I’m not being more specific: without knowing your own habits of thought, it’s hard to guess whose habitual mistakes and quirks will be transparent to you, and which will mislead you or just not paint the intended picture for you.
Probably my advice is, if something you read seems worth understanding, try to spot and discount (what Zvi calls) the Obvious Nonsense. Back in 2008 my aunt sent me a NYT article that (seemingly) claimed having granite countertops was like smoking a pack of cigarettes a day. Obvious nonsense. It was basically pretending the cancer risk from cigarettes was radiological and not chemical, but unless you knew enough about biology and physics it was easy to miss. I was primed for that, because I was a physics undergrad and because I’d recently been reading about how coal plants release more radiation than nuclear plants even if you ignore all other chemical pollution. Eventually this kind of thinking became habitual for me, and it became easier to learn useful things from inadequate sources.
It’s also been educational to go through a few healthcare situations where you need to find doctors that are Actually Good and not just cargo-cult-style going through the motions. There’s a vibe, a style, that just shines through regardless of specific topic, related to curiosity and excitement about something new and different.