Seeking Beta Users for LessWrong-Integrated LLM Chat
Comment here if you’d like access. (Bonus points for describing ways you’d like to use it.)
A couple of months ago, a few of the LW team set out to see how LLMs might be useful in the context of LW. It feels like they should be useful at some point before the end; maybe that point is now. My own attempts to get Claude to be helpful for writing tasks weren’t particularly succeeding, but LLMs are pretty good at reading a lot of material quickly, and can also be good at explaining technical topics.
So I figured just making it easy to load a lot of relevant LessWrong context into an LLM might unlock several worthwhile use-cases. To that end, Robert and I have integrated a Claude chat window into LW, with the key feature that it will automatically pull in LessWrong posts and comments relevant to what you’re asking about.
I’m currently seeking beta users.
Since using the Claude API isn’t free and we haven’t figured out a payment model, we’re not rolling it out broadly. But we are happy to turn it on for select users who want to try it out.
Comment here if you’d like access. (Bonus points for describing ways you’d like to use it.)
@Chris_Leong @Jozdien @Seth Herd @the gears to ascension @ProgramCrafter
You’ve all been granted access to the LW-integrated LLM Chat prototype. Cheers!
Oh, you access it with the sparkle button in the bottom right.
@Neel Nanda @Stephen Fowler @Saul Munn – you’ve been added.
I’m hoping to get a PR deployed today that’ll make a few improvements:
- narrow the width so it doesn’t overlap the post on smaller screens than before
- load more posts into the context window by default
- upweight embedding distance relative to karma in the embedding search for relevant context to load in (a rough sketch of this reweighting follows the list)
- various additions to the system prompt to improve tone and style
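For anyone curious what that reweighting could look like, here’s a minimal sketch of blending embedding similarity with karma when ranking candidate posts. The function, field names, log compression, and 0.25 weight are all invented for illustration (assuming cosine similarity in [0, 1]); this is not the site’s actual ranking code:

```python
import math

def rank_candidates(candidates, karma_weight=0.25):
    """Rank posts by a blend of embedding similarity and karma.

    `candidates` holds dicts with `similarity` (cosine similarity of the
    post's embedding to the query, assumed in [0, 1]) and `karma`.
    "Upweighting embedding distance relative to karma" corresponds to
    lowering `karma_weight`.
    """
    def score(c):
        # Log-compress karma so a 500-karma classic doesn't drown out a
        # closely matching 20-karma post, then squash roughly into [0, 1].
        karma_term = math.log1p(max(c["karma"], 0)) / 10.0
        return (1 - karma_weight) * c["similarity"] + karma_weight * karma_term

    return sorted(candidates, key=score, reverse=True)
```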
great! how do i access it on mobile LW?
Not available on mobile at this time, I’m afraid.
gotcha. what would be the best way to send you feedback? i could do:
- comments here
- sent directly to you via LW DM, email, [dm through some other means]
- or something else if that’s better
(while it’s top-of-mind: the feedback that generated this question was that the chat interface pops up every single time i open a tab of LW, including every time i open a post in a new tab. this gets really annoying very quickly!)
Cheers! Comments here are good, so is LW DM, or Intercom.
I’m interested. I once tried a much more rudimentary LW-LLM integration with a GPT-4 Discord bot and it never felt quite right, so I’d be very interested in seeing what a much better version looks like.
I’m interested. I’ll provide feedback, positive or negative, like I have on other site features and proposed changes. I’d be happy to pay on almost any payment model, at least for a little while. I have a Claude subscription fwiw.
I’d use it to speed up researching prior related work on LW for my posts. I spend a lot of time doing this currently.
I’d like access.
TBH, if it works great I won’t provide any significant feedback, apart from “all good”
But if it annoys me in any way I’ll let you know.
For what it’s worth, I have provided quite a bit of feedback about the website in the past.
I want to see if it helps me with my draft document on proposed alignment solutions:
https://docs.google.com/document/d/1Mis0ZxuS-YIgwy4clC7hKrKEcm6Pn0yn709YUNVcpx8/edit#heading=h.u9eroo3v6v28
Sounds good! I’d recommend pasting in the actual contents together with a description of what you’re after.
Interested! I would pay at cost if that was available. I’ll be asking about which posts are relevant to a question, misc philosophy questions, asking for Claude to challenge me, etc. I’m primarily interested in whether I can ask for brevity via a custom prompt in the system prompt.
I’d like beta access. My main use case is that I intend to write up some thoughts on alignment (Manifold gives 40% that I’ll be proud of a write-up; I’d like that number to go up), and this would be helpful for literature review and finding relevant existing work. Especially so because a lot of the public agent foundations work is old and migrated from the old Alignment Forum, where it’s low-profile compared to more recent posts.
Added!
I’d be interested! I would also love to see the full answer to why people care about SAEs
Added! That’s been one of my go-to questions for testing variations of the system; I’d suggest just trying it yourself.
I’d like access to it.
I’m interested! Also curious as to how this is implemented; are you using retrieval-augmented generation, and if so, with what embeddings?
You are added!
Claude 3.5 Sonnet is the chat client, and yes, with RAG using OpenAI’s text-embedding-3-large for embeddings.
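Roughly, the flow looks like the sketch below. The two API calls match the stack just named, but the vector store, prompt wording, and retrieval count are invented placeholders, not the actual implementation:

```python
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()
anthropic_client = Anthropic()

def answer_with_lw_context(query: str, store) -> str:
    # `store` is a stand-in for whatever index holds post embeddings;
    # `nearest` is assumed to return (title, body) pairs by similarity.
    emb = openai_client.embeddings.create(
        model="text-embedding-3-large", input=query
    ).data[0].embedding
    posts = store.nearest(emb, k=5)
    context = "\n\n".join(f"# {title}\n{body}" for title, body in posts)
    reply = anthropic_client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        system="Answer using the provided LessWrong posts where relevant.",
        messages=[{"role": "user", "content": f"{context}\n\nQuestion: {query}"}],
    )
    return reply.content[0].text
```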
Interested! Unsure how I’ll use it; will need to play around with it to figure that out. But in general, I like asking questions while reading things to stay engaged and I’m very interested to see how it goes with an LLM that’s loaded up with LW context.
Added!
i’d love access! my guess is that i’d use it like — elicit:research papers::[this feature]:LW posts
i’m interested in using it for literature search
I’ll add you now, though I’m in the middle of some changes that should make it better for lit search.
I’m interested! Among other uses, I hope to use it for finding posts that explore similar topics under different names.
By the way, I have an idea for what to use instead of a payment model: interacting with the user’s local LLM, like one started within LM Studio. That’d require a checkbox/field to enter an API URL, some recommendations on which model to use, and working out how to reduce the amount of content fed into the model (as user-run LLMs tend to have smaller context windows than needed). A rough sketch of the idea follows this comment.
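A minimal sketch of that suggestion, assuming LM Studio’s OpenAI-compatible local server (by default at http://localhost:1234/v1); the model name, key, and truncation limit are placeholders:

```python
from openai import OpenAI

# LM Studio exposes an OpenAI-compatible endpoint; the API key is ignored
# locally, so any placeholder string works.
local_client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def local_chat(prompt: str, context: str) -> str:
    # Trim context aggressively: locally run models usually have much
    # smaller context windows than hosted Claude, as noted above.
    reply = local_client.chat.completions.create(
        model="local-model",  # whatever model the user has loaded
        messages=[{"role": "user", "content": f"{context[:8000]}\n\n{prompt}"}],
    )
    return reply.choices[0].message.content
```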
Oh, I didn’t see this! I’d like access, in part because it’s pretty common that I try to find a LessWrong post or comment and the usual search methods don’t work. Also because it seems like a useful way to explore the archives.
Added!
I’d also love to have access!
Added!
I’d love to try it, mainly thinking about research (agent foundations and AI safety macrostrategy).
Your access should be activated within 5-10 minutes. Look for the button in the bottom right of the screen.
I’m interested if you’re still adding folks. I run local rationality meetups, this seems like a potentially interesting way to find readings/topics for meetups (e.g. “find me three readings with three different angles on applied rationality”, “what could be some good readings to juxtapose with burdens by scott alexander”, etc.)
Added! (Can take a few min to activate though.) My advice: for each one of those, ask in a new separate/fresh chat, because it’ll only do a single search per chat.
I’m interested! I’d probably mostly be comparing it to unaugmented Claude for things like explaining ML topics and turning my post ideas into drafts (I don’t expect it to be great at the latter, but I’m curious whether having some relevant posts in the context window will elicit higher quality). I also think the low-friction integration might make it useful for clarifying math- or programming-heavy posts, though I’m not sure I’ll want this often.
You now have access to the LW LLM Chat prototype!
That’s actually one of my favorite use-cases.
I’d love to have early access. I will probably give feedback on bugs in the implementation before it is rolled out to more users, and am happy to use my own API keys.
You’ve been granted access to the LW LLM Chat prototype!
No need to provide an API key (we haven’t even set that up; I was just explaining why we’re having people manually request access rather than making it immediately available more broadly).