I wrote some notes on how we’ve been working to keep the UI simpler, but habryka beat me to it. Meanwhile:
Some thoughts Re: LLM integration
I don’t think we’ll get to agreement within this comment margin. There are a lot of ways LLM integration can go wrong, and the first pass at the JargonBot Beta Test isn’t quite right yet. I hope to fix some of that soon, to make it clearer what it looks like when it’s working well, as a proof of concept.
But, I think LLM integration is going to be extremely important, and I want to say a bit about it.
Most of what LLMs enable is entirely different paradigms of cognition that weren’t possible before. This is sort of an “inventing cars while everyone is still asking for slightly better horses, or being annoyed by the car-centric infrastructure that’s starting to roll out in fits and starts. Horses worked fine, what’s going on?” situation.
I think good LLM integrations make the difference between “it’s exhausting and effortful to read a technical post in a domain you aren’t familiar with” (and therefore, you don’t bother) and “actually, it’s not that much harder than reading a regular post.” (I think several UI challenges need to get worked out for this to work, but they are not particularly impossible UI challenges.) This radically changes the game on what sort of stuff you can learn, and how quickly someone who is somewhat interested in a field can get familiar with it. You can just jump into the post that feels relevant, and have the gaps between your understanding and the cutting edge filled in automatically (instead of having to painstakingly work through the basics of a field before you can start participating).
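To make the “gaps filled in automatically” idea concrete, here’s a minimal sketch of the kind of call involved. This is purely illustrative – the `explainTerm` helper and its prompt are hypothetical, not how JargonBot is actually implemented – but the core move is just: hand the model the post itself as context, so the explanation matches the author’s usage rather than a generic dictionary definition.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical helper: explain one piece of jargon using the post as context,
// pitched at a smart reader who is new to the field.
async function explainTerm(term: string, postBody: string): Promise<string> {
  const resp = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "You explain technical jargon to a smart reader who is new to the field. " +
          "Ground the explanation in how the provided post uses the term, and keep it under 100 words.",
      },
      { role: "user", content: `Post:\n${postBody}\n\nExplain the term: "${term}"` },
    ],
  });
  return resp.choices[0].message.content ?? "";
}
```

The hard problems are downstream of a call like this – when to surface it, how to keep it unobtrusive, how to signal that the explanation is machine-generated – which is why I framed this as a UI challenge rather than a capability challenge.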
Once this is working reliably and you actually deeply believe in it, it opens up new atomic actions that your brain can automatically consider, which would previously have been too expensive to be worth it.
I don’t think we even need advances beyond current LLM skill for this to work pretty well – LLMs aren’t very good at figuring things out at the cutting edge, but they are pretty good at filling in the details that get you up to speed on the basics, and I think it’s pretty obvious how to improve them around the edges here.
This is in addition to the very straightforward LLM integrations into an editor that save the obvious boring bits of work (identifying typos, slight wording confusions, and predictably hard-to-understand sections), freeing up that attention for more complicated problem solving.
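As a sketch of what one such editor pass could look like (again: the names, types, and prompt here are mine and hypothetical, not our actual implementation), a single call can return structured suggestions that an editor UI could render inline:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical shape for a suggestion the editor UI could render inline.
interface EditorSuggestion {
  excerpt: string;    // the problematic text, quoted verbatim from the draft
  issue: "typo" | "wording" | "hard-to-understand";
  suggestion: string; // a proposed fix, or a note on what's unclear
}

// One copyediting pass over a draft, returning machine-readable suggestions.
async function reviewDraft(draft: string): Promise<EditorSuggestion[]> {
  const resp = await client.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" }, // ask for JSON we can parse directly
    messages: [
      {
        role: "system",
        content:
          'You are a copyeditor. Respond with JSON of the form ' +
          '{"suggestions": [{"excerpt": string, "issue": "typo" | "wording" | "hard-to-understand", "suggestion": string}]}. ' +
          "Quote each excerpt verbatim from the draft.",
      },
      { role: "user", content: draft },
    ],
  });
  const parsed = JSON.parse(resp.choices[0].message.content ?? "{}");
  return parsed.suggestions ?? [];
}
```

Returning verbatim excerpts (rather than a rewritten draft) is the design choice that matters: it lets the editor highlight spans in place and keeps the author in control of every change.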
I think it’s important for LessWrong in particular to be at the forefront here, because there are gnarly, important, bottlenecking-for-humanity’s-future problems that require people to skill up rapidly to have a hope of contributing in time. (My inspiration was a colleague kind of casually deciding “I think I’m going to learn about the technical problems underlying compute governance,” and spinning up in the field so they could figure out how to contribute.)