On the topic of GPT-3 as a writing assistant: do you use Loom? It sounds like it would suit your workflow pretty well. It lets you generate a bunch of completions at once and displays them as child nodes of a prompt, automates finding branching points where the model might explore a different thought, and lets you collapse the view of your writing from a tree to an openai-playground style interface.
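The core idea behind a Loom-style interface can be sketched as a tree whose nodes hold text fragments: branching attaches alternative completions as children, and collapsing walks one path back into flat, playground-style text. This is a minimal illustration of the data structure, not Loom's actual implementation; the `Node`, `branch`, and `linearize` names are made up for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    children: list = field(default_factory=list)

    def branch(self, completions):
        # Attach each alternative completion as a child of this node.
        for c in completions:
            self.children.append(Node(c))
        return self.children

    def linearize(self, path):
        # Collapse one root-to-leaf path (a list of child indices)
        # back into flat text, hiding the tree from view.
        node, parts = self, [self.text]
        for i in path:
            node = node.children[i]
            parts.append(node.text)
        return "".join(parts)

root = Node("Once upon a time")
root.branch([", a fox found a vineyard", ", a crow found some cheese"])
print(root.linearize([0]))  # one branch, read as ordinary prose
```

In a real tool the completions would come from an API call requesting several samples at once; here they are hard-coded so the sketch runs on its own.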
But I think someone could probably make something better, like "codex but for writing". It seems unlikely it would have the same features as codex, though, since that is built for coding. Writing has a different workflow, and automating away as much of that workflow as can be managed without sacrificing quality would result in some rather different features. Since GPT-3 would be used to replicate these processes, and it is by design a simulator of human text, which is largely writing, I suspect GPT-3 for writing would automate more parts of a writer's workflow than a programmer's.
Since you use GPT-3 to write, I'm curious if you have any thoughts on the topic. What features would enhance your interactions with GPT-3? Greater context length? The ability to see how your readership would summarise or critique your text? Inserting your off-the-cuff thoughts into the most relevant area of your essay/notes?
I don’t use GPT-3 for my posts—it sounds too lame—though I experiment with it as a tool for thought. There are cool new projects coming up that will improve the workflow.
Context length is of course a big thing that needs to improve. But there are a million things that are fun to explore if one wants to make AI tools for writing, like having a devil's advocate, keeping a log of the open loops that the text has opened in the reader's mind, etc. Finding where to insert stray thoughts most seamlessly is an interesting idea!
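The stray-thought idea can be sketched without any model at all: score each paragraph against the new thought and insert the thought after the best match. A real tool would presumably use embeddings; this sketch uses plain word overlap so it is self-contained, and the function names (`word_overlap`, `insert_thought`) are invented for illustration.

```python
def word_overlap(a: str, b: str) -> float:
    # Crude stand-in for embedding similarity: Jaccard overlap of word sets.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def insert_thought(paragraphs: list[str], thought: str) -> list[str]:
    # Place the stray thought right after the paragraph it most resembles.
    best = max(range(len(paragraphs)),
               key=lambda i: word_overlap(paragraphs[i], thought))
    return paragraphs[:best + 1] + [thought] + paragraphs[best + 1:]

essay = [
    "GPT-3 can draft prose in many styles.",
    "Context length limits how much of an essay it can see.",
    "Editing and structure are still largely manual.",
]
note = "Longer context windows would help with whole essays."
print(insert_thought(essay, note))
```

Swapping `word_overlap` for cosine similarity over sentence embeddings would make the placement far less brittle, but the control flow stays the same.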