It’s not clear to me that this matters. The Internet has had a rather low signal-to-noise ratio since September 1993 (https://en.wikipedia.org/wiki/Eternal_September), simply because most people aren’t terribly bright, and everyone is online.
It’s only a tiny fraction of posters who have anything interesting to say.
Adding bots to the mix doesn’t obviously make it significantly worse. If the bots are powered by sufficiently-smart AI, they might even make it better.
The challenge has always been to sort the signal from the noise—and still is.
I’m getting the sentiment “just sort the signal from the noise, same as always”, and I disagree that it’s the same as always. Maybe it is, if you already had some habits of epistemic hygiene, such as “default to null”:
The mental motion of “I didn’t really parse that paragraph, but sure, whatever, I’ll take the author’s word for it” is, in my introspective experience, absolutely identical to “I didn’t really parse that paragraph because it was bot-generated and didn’t make any sense so I couldn’t possibly have parsed it”, except that in the first case, I assume that the error lies with me rather than the text. This is not a safe assumption in a post-GPT2 world. Instead of “default to humility” (assume that when you don’t understand a passage, the passage is true and you’re just missing something) the ideal mental action in a world full of bots is “default to null” (if you don’t understand a passage, assume you’re in the same epistemic state as if you’d never read it at all.)
If you hadn’t already cultivated such habits, it seems to me things have definitely changed since 1993. Amidst the noise there is now better-cloaked noise, whether due to Dead Internet Theory or LLMs (I’m not sure the reason matters). I understood OP’s question as asking, basically: how do we sort signal from noise, given such cloaking?
I’ll propose an overarching principle: either read something carefully enough to get a gears-level understanding, or don’t read it at all. “Default to null” is one practical side of that: it guards against one way you might accidentally store what you think is a gear, but isn’t.
Good points. Thanks!