For certain kinds of questions (e.g. “I need a new car; what should I get?”), it’s better to ask a bunch of random people than to turn to the internet for advice.
In order to be well-informed, you’ll need to go out and meet people IRL who are connected (at least indirectly) to the thing you want information about.
In the following, I will use the term “my DIT” to refer to the claim that:
In some specific non-trivial contexts, on average more than half of the participants in online debate who pose as distinct human beings are actually bots.
I agree with this version, and I was surprised to see that the Wikipedia definition also includes the bit about it being a deliberate conspiracy, which seems like a strawman: I have always understood “Dead Internet Theory” to include only the bot-prevalence claim, not any claim about coordination. There’s a lot of stuff on the internet that’s very obviously AI-generated, so it’s not much of a stretch to suppose that there’s also a lot of synthetic content that hides it better. But this can be explained far more simply than by some vast conspiracy—as the aggregate effect of many independent SEO, marketing, and astroturfing campaigns.
If Dead Internet Theory is correct, when you see something online, the question you should ask yourself is not “Is this true?” but “Why am I seeing this?” This was always the case to some extent of any algorithmically-curated feed (where the algorithm is anything more complex than “show me all of the posts in reverse chronological order”), but is even more significant when the content itself is algorithmically generated.
If I’m searching online for information about, say, what new car I should buy, there’s a very strong incentive for all the algorithms involved (both the search engine itself and the algorithm that spits out the list of recommended car models) to sell their recommendations to the highest bidder and churn out ex post facto justifications explaining why that car is really the best. Their output is almost totally uncorrelated with the underlying fact of which car I’d actually want, so I know to assign it very little value. Asking a bunch of random acquaintances for car recommendations, on the other hand, is much more useful: they might not be experts, but at least they weren’t specifically selected in order to deceive me. Even if I ask a friend and they say “Well, I haven’t bought a new car in years, but I heard my coworker’s cousin bought the XYZ and never stops complaining about it”, that’s more useful information than anything I could find online, because it’s much less likely that my friend’s coworker’s cousin was being paid to say it.
More broadly, on many questions of public concern there may be parties with a strong interest in using bots to create the impression of a broad consensus one way or another. This means you have no choice but to go out into the real world and ask people, and hope that they’re not simply repeating what they read online but have some non-AI-mediated connection to the thing.