Resulting in a strong downvote and, honestly, outright anger on my part.
If other people have to check it before they quote it, why is it OK for you not to check it before you post it?
I seem to have gotten a “Why?” on this.
The reason is that checking things yourself is a really, really basic, essential standard of discourse[1]. Errors propagate, and the only way to avoid them propagating is not to propagate them.
If this was created using some standard LLM UI, it would have come with some boilerplate “don’t use this without checking it” warning[2]. But it was used without checking it… with another “don’t use without checking” warning. By whatever logic allows that, the next person should be able to use the material, including quoting or summarizing it, without checking either, so long as they include their own warning. The warnings should be able to keep propagating forever.
… but the real consequences of that are a game of telephone:
- An error can get propagated until somebody forgets the warning, or just plain doesn’t feel like including the warning, and then you have false claims of fact circulating with no warning at all. Or the warning deteriorates into “sources claim that”, or “there are rumors that”, or something equally vague that can’t be checked.
- Even if the warning doesn’t get lost or removed, tracing back to sources gets harder with each step in the chain.
- Many readers will end up remembering whatever they took out of the material, including that it came from a “careful” source (because, hey, they were careful to remind you to check up on them)… but forget that they were told it hadn’t been checked, or underestimate the importance of that.
- If multiple people propagate an error, people start seeing it in more than one “independent” source, which really makes them start to think it must be true. It can become “common knowledge”, at least in some circles, and those circles can be surprisingly large.
That pollution of common knowledge is the big problem.
The pollution tends to be even worse because whatever factoid or quote will often get “simplified”, or “summarized”, or stripped of context, or “punched up” at each step. That mutation is itself exacerbated by people not checking references, because if you do check references, at least you’ll often end up mutating the version from a step or two back, instead of building even higher on top of the latest round of errors.
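To put a toy number on how fast those warnings evaporate: here is a minimal sketch in Python of the telephone chain, assuming (purely for illustration; the drop rate is made up, not an empirical estimate) that each reposter independently loses the “don’t use without checking” warning with some fixed probability.

```python
# Toy model of the telephone chain described above: an unverified
# claim is reposted down a chain, and each reposter independently
# drops the attached "don't use without checking" warning with
# probability P_DROP. P_DROP is a made-up illustrative number.

P_DROP = 0.15

def warning_survival(steps: int, p_drop: float = P_DROP) -> float:
    """Probability the warning is still attached after `steps` repostings."""
    return (1.0 - p_drop) ** steps

for steps in (1, 5, 10, 20):
    print(f"after {steps:2d} repostings: "
          f"{warning_survival(steps):5.1%} chance the warning survives")

# after  1 repostings: 85.0% chance the warning survives
# after  5 repostings: 44.4% chance the warning survives
# after 10 repostings: 19.7% chance the warning survives
# after 20 repostings:  3.9% chance the warning survives
```

Even under a fairly generous per-step drop rate, a chain of twenty repostings all but guarantees the claim is circulating somewhere with no warning attached at all, which is exactly the failure mode described above.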
All of this is especially likely to happen when “personalities” or politics are involved. And even more likely to happen when people feel a sense of urgency about “getting this out there as soon as possible”. Everybody in the chain is going to feel that same sense of urgency.
I have seen situations like that created very intentionally in certain political debates (on multiple different topics, all unrelated to anything Less Wrong generally cares about). You get deep chains of references that don’t quite support what they’re claimed to support, spawning “widely known facts” that eventually, if you do the work, turn out to be exaggerations of admitted wild guesses from people who really didn’t have any information at all. People will even intentionally add links to the chain to give others plausible deniability. I don’t think there’s anything intentional here, but there’s a reason that some people do it intentionally. It works. And you can get away with it if the local culture isn’t demanding rigorous care and checking up at every step.
You can also see this sort of thing as an attempt to claim social prestige for a minimal contribution. After all, it would have been possible to just post the link, or post the link and suggest that everybody get their AI to summarize it. But the main issue is that spreading unverified rumors causes widespread epistemic harm.
The standard for the reader should still be “don’t be sure the references support this unless you check them”, which means that when the reader becomes a writer, that reader/writer should not only have checked their own references, but also checked the references of their references, before publishing anything.
Perhaps excusable since nobody actually knows how to make the LLM get it right reliably.
FWIW, my best guess is that the document contains fewer errors than it would if a human had copy-pasted things and stitched them together. The errors have a different nature to them, and so it makes sense to flag them, but, like, I started out with copy-pasting and OCR, and that did not actually have an overall lower error rate.
OP did the work to collect these emails and put them into a post. When people do work for you, you shouldn’t punish them by giving them even more work.
Because I said prominently at the top that I used AI assistance for it. Of course, feel free to do the same.