Please do not use AI to write for you

I’ve recently seen several articles here that were clearly generated or heavily assisted by AI. They are all dreadful. They are verbose, they are full of “on the one hand” and “on the other”, they never make any assertion without also making room for the opposite, and they end with “conclusions” that say nothing. Please do not do this.

————

I wrote the above, unassisted, and it states my own views. To illustrate the point, I then asked GPT-4o to make the same argument in its own words. Here was the prompt:

I have recently seen a substantial increase in the proportion of postings to a certain online forum that were clearly generated by AI, although not marked as such. The writing is verbose and meandering, full of “on the one hand” and “on the other”, never making an assertion without also suggesting its opposite, sounding oh-so-measured and even-handed that it ends up saying nothing at all. Please write an article for me, written in the first person, arguing that AI tools in the current state of the art should never be used for this purpose.

Below the line is the chatbot’s extruded meat product. This is the whole of its first output, unedited, and I did not press it to continue.

The first paragraph starts out ok, being merely verbose and repetitive but setting out the basic claim. And then the second paragraph backpedals past where the first began. The third paragraph floats off into the cloud layer, and so on, for a total of nine paragraphs and 875 words that would be a waste of anyone’s time to read, unless they want to see whether their judgement agrees with mine.

It did not notice that I was using it in exactly the way I had asked it to argue against, and that the article displays all the faults that it purports to diagnose.

The splash image is a photograph from Wikipedia, CC BY-SA 3.0. I’ll spare you the image that GPT-4o came up with, but y’know, I do find the AI illustrations that everyone heads their blog posts with these days pretty annoying. (ETA: Well, there was supposed to be an image; I filled out the “link preview” part of the entry form, but no image appeared. The link above will show you what it was.)

————————————

Why AI-Generated Content Shouldn’t Flood Online Forums

Lately, I’ve noticed something troubling in the online forums I frequent: a distinct uptick in posts that seem like they’ve been generated by AI, though they’re not labeled as such. The language is eerily similar across many of these posts—verbose, wandering, and conspicuously non-committal. Every argument is hedged with a counterpoint, every statement cushioned by a qualifier, making the writing sound balanced on the surface but ultimately empty. The posts stretch on endlessly, piling one vague point onto another, yet they fail to really say anything. It’s frustrating, to say the least, and I believe this trend reveals a serious problem with the way AI is being misused in online spaces.

I want to be clear: I’m not arguing that AI lacks merit as a tool. It has immense potential in fields like data analysis, automation, and even creativity when it complements human intelligence. However, when it comes to contributing to online discourse—spaces where real people go to express ideas, ask questions, and engage in genuine conversation—I believe AI-generated content, at least in its current form, does more harm than good.

First, let’s talk about the nature of conversation itself. Forums, at their best, are places where people share perspectives, debate ideas, and solve problems together. What makes these interactions meaningful is that they are infused with human experience, emotion, and authenticity. Even when disagreements arise, they are rooted in real personal stakes—whether it’s a passion for a hobby, a struggle to understand a concept, or a desire to connect with others. AI-generated content lacks this grounding in lived experience. It can simulate language but not the real human motivations behind it.

What we end up with instead is a hollow imitation of dialogue. AI might be able to string together coherent sentences and give the illusion of reasoned thought, but it cannot care about the subject at hand. It cannot have an opinion, take a stance, or bring a fresh perspective based on actual experience. This is why so much AI-generated content feels like it goes in circles—because it’s programmed to try to cover all bases without truly investing in any one position. It’s as if the AI is trying to preemptively satisfy everyone without committing to anything, leading to endless qualifications, balancing acts, and vague statements. It dilutes the richness of discussions and makes it harder for people to find valuable insights.

The other major issue is the erosion of trust in these spaces. When users post AI-generated content without disclosing its origin, it blurs the line between what is authentic and what is artificial. We’ve come to online forums expecting that the people we interact with are fellow humans sharing their thoughts and experiences. The rise of AI-generated content undermines this basic expectation. If I can’t be sure whether a post I’m reading was written by a person or an algorithm, it alters the dynamic of the conversation. Authenticity becomes harder to gauge, and people might become more cynical or disengaged, assuming that everything is automated. It strips away the sense of community and shared experience that makes forums worth participating in.

Moreover, when AI-generated content dominates, it can crowd out the voices of real users. Human contributors, who may be more tentative or less prolific than an AI, could find their voices drowned out by the sheer volume of algorithmically generated posts. What’s worse is that this influx of AI content can create an artificial sense of consensus or expertise on certain topics, even when the contributions are shallow or redundant. We could end up with forum threads that look “robust” from the outside but are actually just padded with regurgitated, generic responses that add little value.

I also worry that this trend encourages a kind of intellectual laziness. If someone can rely on an AI to generate a long-winded, seemingly well-rounded post for them, what incentive is there to engage deeply with the topic themselves? Forums are supposed to challenge us to think, to articulate our own ideas, and to engage critically with others. But when AI does the work for us, it short-circuits that process. We lose out on the chance to develop our own thoughts, and conversations stagnate because nobody is pushing themselves to really grapple with the subject matter.

So, what’s the solution? In my view, AI should be used carefully and transparently in these contexts. If someone chooses to use an AI to assist with their post, they should disclose that fact upfront. This way, the community can make informed judgments about the content they’re reading. I also think forum administrators should consider limiting the amount of AI-generated content that’s allowed to be posted, ensuring that real human voices remain at the forefront of discussions.

In the long run, we need to remember why we engage in these spaces in the first place. Forums thrive on the exchange of ideas between real people, each with their own perspectives and experiences. AI, while impressive in many ways, simply can’t replicate that. At least not yet. Until AI reaches a point where it can contribute meaningfully—and with the clarity and depth that real conversations demand—I believe it should stay out of the forums. For the sake of genuine discourse, we need to prioritize human connection over algorithmic convenience.