Is Clickbait Destroying Our General Intelligence?
(Cross-posted from Facebook.)
Now and then people have asked me if I think that other people should also avoid high school or college if they want to develop new ideas. This always felt to me like a wrong way to look at the question, but I didn’t know a right one.
Recently I thought of a scary new viewpoint on that subject.
This started with a conversation with Arthur in which he mentioned an idea of Yoshua Bengio's: that the software of general intelligence was developed memetically. I remarked that I didn't think duplicating this culturally transmitted software would be a significant part of the problem of AGI development. (Roughly: software that can only be transmitted with low fidelity tends to be algorithmically shallow. Further discussion has been moved to a comment below.)
But this conversation did get me thinking about the topic of culturally transmitted software that contributes to human general intelligence. That software can be an important gear even if it’s an algorithmically shallow part of the overall machinery. Removing a few simple gears that are 2% of a machine’s mass can reduce the machine’s performance by way more than 2%. Feral children would be the case in point.
A scary question is whether it’s possible to do subtler damage to the culturally transmitted software of general intelligence.
I’ve had the sense before that the Internet is turning our society stupider and meaner. My primary hypothesis is “The Internet is selecting harder on a larger population of ideas, and sanity falls off the selective frontier once you select hard enough.”
To review: there's a general idea that strong (social) selection on a characteristic imperfectly correlated with some other metric of goodness can be bad for that metric, even where weak (social) selection on that same characteristic was good. If you press scientists a little to produce publishable work, they might do science that's of greater interest to others. If you select very harshly on publication records, the academics spend all their time worrying about publishing, and real science falls by the wayside.
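To make that concrete, here is a minimal toy simulation of the dynamic. (This is an illustrative sketch with entirely made-up parameters, not a model fit to any data.) Each meme splits a fixed effort budget between real quality and hedonic flash; selection sees only appeal, flash is assumed to buy appeal three times faster than quality does, and effort decays unless selection actively maintains it, so a weakly selected population settles short of the budget frontier:

```python
import numpy as np

# Toy illustration only: "quality", "flash", and every number below are
# made up, chosen to exhibit the claimed dynamic rather than to fit data.

def evolve(keep, pop=2000, gens=400, budget=1.0, appeal_w=3.0,
           decay=0.98, mut=0.02, seed=0):
    """Evolve a meme pool where each meme splits a fixed effort budget
    between real quality (q) and hedonic flash (f). Selection sees only
    appeal = q + appeal_w * f, and effort decays unless selection
    actively maintains it."""
    rng = np.random.default_rng(seed)
    q = np.full(pop, 0.05)
    f = np.full(pop, 0.05)
    for _ in range(gens):
        appeal = q + appeal_w * f               # flash buys appeal faster
        cut = np.quantile(appeal, 1.0 - keep)   # top `keep` fraction survives
        parents = rng.choice(np.flatnonzero(appeal >= cut), size=pop)
        q = np.clip(decay * q[parents] + rng.normal(0, mut, pop), 0.0, None)
        f = np.clip(decay * f[parents] + rng.normal(0, mut, pop), 0.0, None)
        over = q + f > budget                   # the effort budget binds here:
        scale = budget / (q[over] + f[over])    # on this frontier, flash can
        q[over] *= scale                        # only grow at quality's expense
        f[over] *= scale
    return q.mean(), f.mean()

for keep in (0.9, 0.1):                         # mild vs. harsh selection
    quality, flash = evolve(keep)
    print(f"keep top {keep:.0%}: mean quality {quality:.3f}, mean flash {flash:.3f}")
```

In this toy, the mildly selected population settles with more quality than it started with, while the harshly selected one reaches the budget frontier within a few dozen generations and then trades nearly all of its quality away for flash. The frontier is where the proxy and the goal stop being positively correlated; "sanity falls off the selective frontier" is the claim that hard enough selection always ends up there.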
On my feed yesterday was an essay complaining about how the intense competition to get into Harvard is producing a monoculture of students who’ve lined up every single standard accomplishment and how these students don’t know anything else they want to do with their lives. Gentle, soft competition on a few accomplishments might select genuinely stronger students; hypercompetition for the appearance of strength produces weakness, or just emptiness.
A hypothesis I find plausible is that the Internet, and maybe television before it, selected much more harshly from a much wider field of memes; and also allowed content to be tailored to ever-narrower audiences. The Internet is making it possible for ideas that are optimized to appeal hedonically-virally within a filter bubble to outcompete ideas that have been even slightly optimized for anything else. We're looking at a collapse of reference to expertise because deferring to expertise costs a couple of hedons compared to being told that all your intuitions are perfectly right, and at the harsh selective frontier there's no room for that. We're looking at a collapse of interaction between bubbles because there used to be just a few newspapers serving all the bubbles; and now that the bubbles have separated, there's little incentive to show people how to be fair in their judgment of ideas from other bubbles; that isn't the most appealing Tumblr content. Print magazines in the 1950s were hardly perfect, but they could get away with sometimes presenting complicated issues as complicated, because there weren't a hundred blogs saying otherwise and stealing their clicks. Or at least, that's the hypothesis.
It seems plausible to me that basic software for intelligent functioning is being damaged by this hypercompetition. Especially in a social context, but maybe even outside it; that kind of thing tends to slop over. When someone politely presents you with a careful argument, does your cultural software tell you that you're supposed to listen and give a careful response, or to make fun of the other person and then laugh about how upset they are? What about when your own brain tries to generate a careful argument? Does your cultural milieu give you any examples of people showing how to really care deeply about something (i.e., debate the consequences of different paths and hew hard to the best one), or is everything you see just people competing to be loud in their identifications? The Occupy movement's not having any demands or agenda could represent mild damage to a gear of human general intelligence that was culturally transmitted and that enabled processing of a certain kind of goal-directed behavior. And I'm not sure to what extent that is merely a metaphor, versus simple fact if we could look at the true software laid out. If you look at how some bubbles are talking and thinking now, "intellectually feral children" doesn't seem like entirely inappropriate language.
Shortly after that conversation with Arthur, it occurred to me that I was pretty much raised and socialized by my parents’ collection of science fiction.
My parents’ collection of old science fiction.
Isaac Asimov. H. Beam Piper. A. E. van Vogt. Early Heinlein, because my parents didn’t want me reading the later books.
And when I did try reading science fiction from later days, a lot of it struck me as… icky. Neuromancer, bleah, what is wrong with this book, it feels damaged, why do people like this, it feels like there’s way too much flash and it ate the substance, it’s showing off way too hard.
And now that I think about it, I feel like a lot of my writing on rationality would be a lot more popular if I could go back in time to the 1960s and present it there. “Twelve Virtues of Rationality” is what people could’ve been reading instead of Heinlein’s Stranger in a Strange Land, to take a different path from the branching point that found Stranger in a Strange Land appealing.
I didn’t stick to merely the culture I was raised in, because that wasn’t what that culture said to do. The characters I read didn’t keep to the way they were raised. They were constantly being challenged with new ideas and often modified or partially rejected those ideas in the course of absorbing them. If you were immersed in an alien civilization that had some good ideas, you were supposed to consider it open-mindedly and then steal only the good parts. Which… kind of sounds axiomatic to me? You could make a case that this is an obvious guideline for how to do generic optimization. It’s just what you do to process an input. And yet “when you encounter a different way of thinking, judge it open-mindedly and then steal only the good parts” is directly contradicted by some modern software that seems to be memetically hypercompetitive. It probably sounds a bit alien or weird to some people reading this, at least as something that you’d say out loud. Software contributing to generic optimization has been damaged.
Later the Internet came along and exposed me to some modern developments, some of which are indeed improvements. But only after I had a cognitive and ethical foundation that could judge which changes were progress versus damage. More importantly, a cognitive foundation that had the idea of even trying to do that. Tversky and Kahneman's work didn't exist in the 1950s, but when I was exposed to the new cognitive biases literature, I reacted like an Isaac Asimov character trying to integrate it into their existing ideas about psychohistory, instead of a William Gibson character wondering how it would look on a black and chrome T-shirt. If that reference still means anything to anyone.
I suspect some culturally transmitted parts of the general intelligence software got damaged by radio, television, and the Internet, with a key causal step being an increased hypercompetition of ideas compared to earlier years. I suspect this independently of any other hypotheses about my origin story. It feels to me like the historical case for this thesis ought to be visible by mere observation to anyone who watched the quality of online discussion degrade from 2002 to 2017.
But if you consider me to be more than usually intellectually productive for an average Ashkenazic genius in the modern generation, then in this connection it’s an interesting and scary further observation that I was initially socialized by books written before the Great Stagnation. Or by books written by authors from only a single generation later, who read a lot of old books themselves and didn’t watch much television.
That hypothesis doesn’t feel wrong to me the way that “oh you just need to not go to college” feels wrong to me.
I kind of have conflicting feelings about this post, but still think it should at least be nominated for the 2018 review.
I think the point about memetically transmitted ideas only really being able to perform a shallow (though maybe still crucial) part of cognition is pretty important, and might on its own deserve nomination.
The overall point about clickbait and the internet also feels really important to me, but at the same time I feel conflicted, because it kind of pattern-matches to a narrative that I feel performs badly on some reference-class forecasting perspectives. I do think the Goodhart's law points are pretty clear, but I really wish we could do some more systematic study of whether the things that Eliezer is pointing to are real.
So overall, I really want this to be reviewed, at least so that we can maybe collectively put some effort into finding more empirical evidence for Eliezer's claims in this post, and see whether they hold up. If they do, then I think that's of quite significant importance.
Initially, I did not nominate this post, for reasons similar to Habryka’s note that “it kind of pattern-matches to a narrative that I feel performs badly on some reference-class forecasting perspectives”.
But, upon reflection: the hypotheses here do feel "important if true", and moreover the model seems plausible. And, regardless, "What exactly is modern internet culture doing to us?" seems like a really important question, which I'd like to have seriously investigated. It seems like exactly the sort of thing rationality is for: a high-stakes question with limited information, potentially with only a limited window to get the answer right.
So, this nomination is not (necessarily) because I think this should be included in the Best of 2018 book, but because I want the claims to get more thorough review/operationalization/thinking-about-what-future-work-is-helpful. (Meanwhile, I've definitely thought a lot about it in the past year.)
...
(Addendum: I also think this might have been the post that crystallized the idea of "hypercompetition can produce worse results" for me, including in domains like college admissions and hiring. I think I've gotten that idea from a few different places, but I noticed the point on the re-read here, and it's definitely a hypothesis I consider more often now.)
Re your addendum, to make an almost-obvious point: over-optimization producing worse results is what large parts of modern life are all about, typically over-optimizing on evolved behaviours. Fat/sugar, porn, watching TV (as a substitute for real life), gambling (risk-taking to seek reward), consumerism, and indeed excess money-seeking (accumulating unnecessary resources), etc. The bad results often take the form of addictions.
Though some such things are arguably harmless (e.g. professional sport—building unnecessary muscles/abilities full-time to win a pointless status contest).