I’d like clarification on using AI as a writing assistant by having a whole conversation with it, then letting it do the primary writing. I’m hoping this meets your criterion of “add significant value”.
I thought Jan Kulveit had real success with this method in A Three-Layer Model of LLM Psychology and AI Assistants Should Have a Direct Line to Their Developers. He credited Claude with the writing without mentioning how much he edited it. I find it plausible that he edited very little because his contribution had been extensive on the “prompting” side. Because it was a conversation, it wasn’t just prompting, but also using the AI as a thinking assistant.
I think something like this method should be encouraged; done under the right guidelines, it can actually reduce AI slop. For poor researchers/thinkers, a conversation with an AI that’s prompted to avoid sycophancy and offer other perspectives can lead to them not publishing the post at all, or publishing a vastly better-thought-out version. For good researchers or thinkers who aren’t fast or confident writers, it can get important ideas out of the drafts folder and into the world.
A stock prompt included in the guidelines might improve a lot of posts and prevent a lot of others from being published at all.
I recently tried prompting 4.5 to tell me what a prosaic alignment researcher might think about my post draft. The post is now much better and remains unpublished. I intend to do a lot more of this in the future.
The guidelines could actually include a stock prompt that you ask people to use and to report having used.
Then people don’t need to publish on LW to get feedback on their ideas (which they aren’t going to get anyway if the post is badly written); they’ve already gotten it from the stock prompt. Reading some guidelines on this and other LW objectives could be obligatory before writing the first few (maybe three) posts on a new account, even if you can just click past them if you insist.
The idea of prompting a model to respond with particular perspectives on a post was a combination of two ideas, one of which came from LW and neither of which was originally mine. I’d love an automated tool to run a bunch of simulated comments before something is posted, but the same effect can be had with a little prompting, along the lines of the sketch below.
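For concreteness, here is a minimal sketch of what that “little prompting” could look like as a script. It assumes the OpenAI Python client purely for illustration; the persona list, model name, and file handling are hypothetical stand-ins, not a proposal for how an actual LW tool would work:

```python
import openai  # assumed dependency; any chat-completion client would work the same way

# Illustrative personas only; a real stock prompt would come from the site guidelines.
PERSONAS = [
    "a prosaic alignment researcher skeptical of purely conceptual arguments",
    "a careful LessWrong commenter checking for unstated assumptions",
    "an editor flagging AI-slop phrasing and unsupported claims",
]

def simulated_comments(draft: str, model: str = "gpt-4o") -> list[str]:
    """Ask the model for one simulated comment per persona on a post draft."""
    client = openai.OpenAI()
    comments = []
    for persona in PERSONAS:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": f"You are {persona}. Avoid sycophancy; give your honest "
                            "objections and the strongest alternative perspective."},
                {"role": "user", "content": f"Comment on this post draft:\n\n{draft}"},
            ],
        )
        comments.append(response.choices[0].message.content)
    return comments

if __name__ == "__main__":
    # "draft.md" is a placeholder for wherever the unpublished post lives.
    with open("draft.md") as f:
        for comment in simulated_comments(f.read()):
            print(comment, "\n---")
```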