Upvoted and agreed, but the ship has sailed. We need to help people use LLM article generation well, so that it helps them focus and express their thoughts rather than just producing longer versions of a brief thesis. This is usually a multi-step process: generating and iterating on an outline, then individually generating and iterating on each section (of a paragraph or two). The iteration, with human control over the conciseness and correctness of each portion, is absolutely key.
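To make that workflow concrete, here is a minimal sketch of the outline-then-sections loop, assuming the OpenAI Python SDK (v1.x) with an API key in the environment. The prompts, the thesis string, and the input()-based review step are illustrative placeholders, not a finished tool; the point is only that the human reviews and revises each piece before moving on.

```python
# Minimal sketch of iterative article generation: outline first, then each
# section, with a human revision loop at every step. Assumes the OpenAI
# Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def iterate(label: str, prompt: str) -> str:
    """Generate a draft, then let the human demand revisions until satisfied."""
    draft = generate(prompt)
    while True:
        print(f"\n--- {label} ---\n{draft}")
        feedback = input("Revision request (empty to accept): ").strip()
        if not feedback:
            return draft
        draft = generate(
            f"Revise the following {label}.\nInstruction: {feedback}\n\n{draft}"
        )

# Illustrative thesis; in practice the human supplies and refines this.
thesis = "AI-generated posts degrade forum discussion quality."
outline = iterate(
    "outline",
    f"Write a 4-point outline for a short article arguing: {thesis}",
)

# Expand each outline point into its own short, separately reviewed section.
points = [p for p in outline.splitlines() if p.strip()]
sections = [
    iterate(
        f"section {i + 1}",
        f"Write one or two tight paragraphs for this outline point, "
        f"consistent with the thesis '{thesis}':\n{point}",
    )
    for i, point in enumerate(points)
]
print("\n\n".join(sections))
```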
As a low-effort demonstration of the variance, I made a very small change to your prompt and did a one-shot with GPT-4o: I specified LessWrong and added an instruction for tone.
I have recently seen a substantial increase in the proportion of postings to the LessWrong online forum that were clearly generated by AI, although not marked as such. The writing is verbose and meandering, full of “on the one hand” and “on the other”, never making an assertion without also suggesting its opposite, sounding oh-so-measured and even-handed that it ends up saying nothing at all. Please write a brief article for me, written in the first person, arguing that AI tools in the current state of the art should not be used for this purpose. Keep it factual and direct, without unnecessary elaboration or verbosity.
and got
I’ve noticed a growing trend on the LessWrong forum where AI-generated content is becoming more prevalent, and it’s not marked as such. This raises significant concerns, especially when the writing is verbose, indecisive, and ultimately devoid of substance. The hallmark of these posts is a constant balancing act—presenting multiple sides of an argument without ever committing to a clear stance. This approach, while superficially measured, often results in content that says nothing meaningful.
AI tools, as they currently stand, are excellent at generating text that mimics human writing, but they lack the ability to produce genuinely insightful or novel ideas. Their output tends to be safe, repetitive, and lacking in depth because they are designed to avoid controversy and adhere to patterns seen in their training data. This often leads to the production of content that feels empty—words strung together in a way that looks polished but fails to engage or challenge the reader.
The problem with using AI for content creation in thoughtful forums like LessWrong is that it undermines the quality of discussion. The platform thrives on clear, well-reasoned arguments that push the boundaries of rational thought. When AI-generated content is introduced into this space without proper identification, it dilutes the discourse. Readers are left sifting through noise to find genuine contributions, which can erode the overall value of the community.
In its current state, AI should not be used for generating posts on platforms that prioritize intellectual rigor and meaningful debate. Until AI can contribute something more than surface-level analysis and pseudo-argumentation, its use in these contexts should be reconsidered. Identifying AI-generated content clearly when it is used is also essential to maintain the integrity of the forum and respect the expectations of its users.
This is much better (IMO) than your initial attempt, and I believe it could be improved considerably further with more significant effort put into multi-shot prompt engineering and light edits afterward.
I’d rephrase your title as “If you’re going to use AI to write for you, use it as a fairly limited assistant rather than an open-ended delegation of responsibility”. People need to put in the effort to clarify their own thoughts and express them clearly in text. AI is really helpful for grammar and the mechanics of writing, and not (yet) any good at conceptual presentation that isn’t basically a summary of something else.
Here’s what I got out of Llama-3-405b, after adding one sentence to your prompt (at the end) to adjust the style. Prompt:
I have recently seen a substantial increase in the proportion of postings to the LessWrong online forum that were clearly generated by AI, although not marked as such. The writing is verbose and meandering, full of “on the one hand” and “on the other”, never making an assertion without also suggesting its opposite, sounding oh-so-measured and even-handed that it ends up saying nothing at all. Please write a brief article for me, written in the first person, arguing that AI tools in the current state of the art should not be used for this purpose. Keep it factual and direct, without unnecessary elaboration or verbosity. Write in text, dense with ideas but using simple words, avoid lists of points or one-sentence paragraphs, avoid rare and fancy-sounding words.
Llama-3-405b:
I’m concerned about the growing presence of AI-generated content on online forums. Recently, I’ve noticed a surge in posts that seem to be written by AI tools, but aren’t labeled as such. These posts are easy to spot due to their overly cautious and ambiguous tone. They often use phrases like “on the one hand” and “on the other” to present multiple viewpoints without taking a clear stance.
This type of content may seem harmless, but I believe it’s detrimental to online discussions. When AI tools generate posts that aim to sound measured and even-handed, they end up lacking substance. They fail to contribute meaningful ideas or insights, instead muddling the conversation with vague statements. This can lead to confusion and frustration among human participants, who may struggle to engage with the ambiguous arguments presented.
Moreover, AI-generated content can undermine the value of online forums. These platforms rely on the exchange of genuine ideas and perspectives to foster learning and growth. When AI tools masquerade as human contributors, they dilute the quality of the discussion and erode trust among community members. It’s essential to maintain transparency and authenticity in online interactions, and AI-generated content can compromise these values.
Currently, AI tools are not sophisticated enough to replicate the nuance and depth of human thought. They can process vast amounts of information, but they lack the contextual understanding and emotional intelligence that humans take for granted. As a result, AI-generated content often sounds stilted and unnatural, lacking the flair and personality that makes human writing engaging.
In my opinion, AI tools should not be used to generate content for online forums in their current state. While they may be useful for tasks like data analysis or language translation, they are not yet capable of producing high-quality, engaging content that adds value to online discussions. By relying on human contributors, we can ensure that online forums remain vibrant, informative, and authentic.