Judging by the low-quality articles that seem to be appearing with increasing regularity, and as mentioned in a few recent posts, AI-generated posts are likely to be a permanent feature of LW (and of most online forums, I expect). I wonder if we should focus on harm reduction (or actual value creation, in some cases) rather than trying to disallow something that people clearly want to do.
I wonder how feasible it would be to have a LessWrong-specific workflow for using any or all of the major platforms to assist with (and not fully write) a LW question, a LW summary-of-research post, or a LW rationalist-exploration-of-a-question post (and/or others). This could simply be a help page with sample prompts for “how to generate and use a summary paragraph”, “how to generate and modify an outline/thesis sketch”, and “how to use the summary and outline to flesh out your ideas on a subtopic”.
I’ve played with these techniques, but I tend to do it all in my captive meatware LLM rather than using an external one, so I don’t have a starter example. Do any of you?
We are actually in a sprint right now where we are experimenting with integrating LLM systems directly into LW in various ways.
A thing I’ve been thinking about is building tools on LW that make it easy to embed LLM-generated content, but in a way where any reader can see the history of how that content was generated (i.e. whatever prompt or conversational history led to that output). My hope would be that instead of people introducing lots of LLM slop and LLM-style errors into their reasoning, the experience of threads where people use LLMs becomes more one of “collectively looking at the output of the LLM and being curious about why it gave that answer”, which I feel has better epistemic and discourse effects.
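To gesture at what “visible history” could mean concretely, here is a rough sketch of the kind of metadata an embedded LLM block might carry; the names are purely illustrative, not anything we have actually built:

```typescript
// Illustrative sketch only: a shape for the provenance an embedded LLM block
// could expose, so readers can expand it and see how the text was produced.
// All names here are invented for the example, not real LW schema.
interface LlmTurn {
  role: "system" | "user" | "assistant";
  content: string;
}

interface EmbeddedLlmContent {
  model: string;           // e.g. "claude-3-5-sonnet" or "gpt-4o"
  generatedAt: string;     // ISO timestamp of when the output was produced
  conversation: LlmTurn[]; // the full prompt/response history that led to the output
  displayedText: string;   // the excerpt actually shown inline in the post or comment
}
```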
We’ve also been working with base models and completion models where the aim is to get LLM output that really sounds like you and picks up on your reasoning patterns, and that more broadly understands what kind of writing we want on LW.
This is all in relatively early stages but we are thinking pretty actively about it.
That’s awesome. One of my worries about this (which applies to most harm-reduction programs) is that I’d rather have less current-quality-LLM-generated stuff on LW overall, and making it a first-class feature makes it seem like I want more of it.
Having a very transparent not-the-same-as-a-post mechanism solves this worry very well.
I don’t think there should be a help page that says “This is the official LW way to generate a summary paragraph”, but at the same time I would appreciate individual users sharing knowledge about how they might use LLMs for the task.
LLM capabilities differ and evolve quite fast, so a help page might be out of date pretty soon.
One example is how to deal with sources. I recently wanted to explore a question about California banning the use of hypnosis for medical purposes by non-licensed people. ChatGPT was able to give me actual links to the relevant portions of the legal code.
Claude was not able to do that, and I think there’s a good chance that the capability is quite recent in ChatGPT’s history and comes with OpenAI’s push to build their own search engine.
As far as standard prompts go, I would expect something along the lines of “What are the most likely objections people on LessWrong are going to have to the following post I want to write, and what’s the merit of those objections: ‘… My draft …’” to be a prompt that would be good for most people to run before they publish a post.
I’m not sure if we have formally written guidelines about the purpose of LW and what is appropriate to post here, but there certainly have been postings to that effect. Would such guidelines themselves be a suitable prompt?
In other words, make it easy for people to ask the bot to write what LW wants to see.
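To make that concrete, here is a rough sketch of what “guidelines as prompt” could look like; the file name, model, and prompt wording are placeholders I made up, not anything LW provides (it assumes the OpenAI Node SDK with an API key in the environment):

```typescript
// Sketch under stated assumptions: prepend the site's posting guidelines as a
// system prompt, then ask for likely objections to a draft before publishing.
// "guidelines.md", the model name, and the prompt wording are all placeholders.
import OpenAI from "openai";
import { readFileSync } from "node:fs";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function reviewDraft(draft: string): Promise<string> {
  const guidelines = readFileSync("guidelines.md", "utf8"); // LW posting norms, saved locally
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: `You are reviewing a draft post for LessWrong. Judge it against these site guidelines:\n\n${guidelines}`,
      },
      {
        role: "user",
        content: `What are the most likely objections LessWrong readers will have to this draft, and what is the merit of each objection?\n\n${draft}`,
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```

The same structure would work with any of the major providers; the point is just that the guidelines go into the system prompt explicitly rather than being left implicit.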