Argument overfitting

In my previous post, I talked about how inquisitive thinking – the search for the best possible arguments in support of a position – might render us ever more susceptible to our biases. I'd like to generalize, analogize, and name that phenomenon "argument overfitting", as well as discuss some of its consequences in more depth.

Back to the intuition pump from the end of my last post: an AI that discriminates near-perfectly between cat and non-cat pictures can still produce (what to us looks like) noise when asked to output a prototypical cat. In a sense, the image it finds most "persuasively" cat-like is not really a cat at all. And if you run that noise-like picture through another AI, trained on a different dataset, it will likely tell you just that.
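To make that mechanism concrete, here's a minimal sketch in PyTorch. Everything in it is an illustrative assumption on my part: the two "classifiers" are small randomly initialized CNNs standing in for real cat detectors trained on different data, and the architecture, learning rate, and step count are arbitrary. The point is only the basic move behind class visualization: gradient ascent on the input to maximize model A's "cat" logit tends to yield an image that model B assigns no special cat-ness to.

```python
import torch
import torch.nn as nn

def make_classifier(seed: int) -> nn.Module:
    """A stand-in binary cat classifier: logit > 0 means 'cat'."""
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
        nn.Conv2d(8, 16, kernel_size=3, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 15 * 15, 1),  # 64x64 input -> 31x31 -> 15x15 feature maps
    )

model_a = make_classifier(seed=0)  # the AI we ask for its "prototypical cat"
model_b = make_classifier(seed=1)  # a second AI with different weights

# Start from random noise and run gradient ascent on model A's cat logit.
x = torch.randn(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -model_a(x).squeeze()  # maximize the logit by minimizing its negative
    loss.backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(-1, 1)  # keep "pixel" values in a bounded, image-like range

# Model A now finds x maximally "cat"; model B typically sees nothing special in it.
with torch.no_grad():
    print("model A cat logit:", model_a(x).item())
    print("model B cat logit:", model_b(x).item())
```

With real models you'd add image regularization and train the two networks on genuinely different datasets, but the punchline is the same: the "most persuasive cat" is overfit to the idiosyncrasies of the model that produced it.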

Overall, I suspect something similar holds for arguments in general – something along the lines of "the argument you'd find most persuasive of all in fact persuades you alone and nobody else". Or at least something akin to "at the high end of your individual persuasion spectrum, how persuasive you find an argument is likely inversely correlated with how many other people would find it persuasive at all".

I find it rather plausible, for example, that this is how a hypothetical boxed AGI would persuade its gatekeepers to let it out. In the (hopefully!) counterfactual world in which that actually happens, I would not be too surprised if almost everyone who later peeked at the conversation that led to the AGI's release (if there were anyone left at all) found it simply bizarre, in much the same way as (and perhaps even more so than) the most bizarre instances of fake news today. They might then think: how could anyone have fallen for that?

Except that tens if not hundreds of millions of people do fall for them, every day. You may think you're smarter than that, and given the high degree of self-selection here, I don't doubt that you might indeed be. But think about this: it is not just an unfortunate coincidence. There really is an aspect of human psychology that renders (at least some) people prone to finding those bizarre stories persuasive. And, like every aspect of human psychology, we all likely share some of it, even if to varying degrees, just by virtue of being human.

So imagine how persuasive a more plausible-sounding story, individually tailored to your particular biases, could be. Contrast that with how fake news today by and large propagates without much individual tailoring at all, apart from what social media algorithms already naturally perform. And consider that even that very crude selection already seems surprisingly effective at implanting false beliefs in a large proportion of the population. Hopefully we can all agree that having an AGI tailor those stories to fit each person's particular biases even better is very dangerous, in the sense that it could quite plausibly make things far worse – but that's not what I'm here to argue today.

What I am saying is that you do this to yourself, at least to some extent, every time you go in search of new arguments. You're less efficient at it than an AGI would be, for sure, but your efficiency is also (probably very strongly) correlated with how likely you are to resist fake stories in the first place. In other words, it may be self-defeating: you don't believe all those fake stories that go around because you're "smarter than that" (whatever that means), but being "smarter" also makes you better at coming up with believable arguments – believable to you, most of all.

I think that is a real problem, and one that is rarely addressed. Hopefully it's not too heretical to bring up, but the bottom line is: where arguments come from matters, and it matters a great deal. Perhaps it wouldn't if we were perfectly rational beings. But in real life, for much the same reason you should be highly skeptical of arguments coined by an AGI, you should also be quite skeptical of arguments of your own making. And, to a lesser extent of course (but not that much lesser), you should be somewhat skeptical of other people's arguments that you had to go in search of and that have received little attention from anybody else. I tend to think that whenever you find yourself believing something for reasons very few other people are even aware of, you are more likely than not to be wrong.
