I agree with your intended point, but disagree with it as stated. There are certainly ways to use AI to write for you well. Claude Opus is a much more natural writer than GPT-4o, especially with more than simple prompting, even on the specific points you raise: conciseness, filler phrases, equivocating, and inordinately high proportions of applause-light text.
In the extreme, Janus sometimes simulates conversational branches with a base LLM before having the conversation, and copies text over from the AI’s completions during the conversation. I’ve certainly never been able to tell reliably when I’m speaking to human-Janus or AI-Janus.
AIUI Janus mostly uses their(?) Loom interface, which allows extremely fine-grained control over the outputs; in my experience using the less-powerful free chat interface, Claude tends to fall into the same failure modes as 4o when I ask it to flesh out my ideas, albeit to a lesser extent. It’ll often include things like calls to action, claims that the (minor and technical) points I want to make have far-reaching social implications of which we must be aware, etc. (and it’s prone to injecting the perspective that AIs are definitely not conscious in response to prompts that included no instructions of that nature).
Loom is definitely far more powerful, but there are other (weaker) ways of steering the outputs toward specific parts of the latent space; these often fall under the label of “prompt engineering”, a practice that’s commonly much broader than the usual usage of the term suggests. Janus’ twitter feed, for example, has some examples of LLMs acting in ways that would, I think, be very strange to someone who’s only seen them act the way they do at the start of a conversation. (Not that those examples are specifically better at the things you describe, but I think they’re similarly far from the models’ usual style.)