The obvious advice is of course “whatever thing you want to learn, let an LLM help you learn it”. Throw that post in the context window, zoom in on terms, ask it for examples of the kind the author intended, let it generate exercises, let it rewrite it for your reading level.
If you’re already doing that and it’s not helping, maybe… more dakka? And you’re going to have to expand on what your goals are and what you want to learn/make.
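If it helps to see the loop concretely, here’s a minimal sketch of that workflow using the OpenAI Python client. The model name, prompts, and `post.txt` file are just placeholders I made up, not anyone’s actual setup; any chat-capable model and any post will do.

```python
# Minimal sketch of the "let an LLM help you learn it" loop.
# Assumptions: the OpenAI Python client is installed, OPENAI_API_KEY is set,
# and the post you want to learn from is saved in post.txt (placeholder name).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

post_text = open("post.txt").read()

def ask(instruction: str) -> str:
    """Send the post plus one instruction, return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works here
        messages=[
            {"role": "system", "content": "You are a patient tutor."},
            {"role": "user", "content": f"{post_text}\n\n{instruction}"},
        ],
    )
    return response.choices[0].message.content

# One instruction per step of the workflow above:
print(ask("Define the key terms in this post in plain language."))
print(ask("Give concrete examples of the kind the author intended."))
print(ask("Write three exercises that test whether I understood the post."))
print(ask("Rewrite the post at a high-school reading level."))
```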
Thanks. That’s actually the technique that made me feel like the burden of membership might be manageable this time around… (minus the “reading level” thing. -_-; I went to college when I was 17 and wrote a philo paper that got accused of plagiarism with no plagiaree. My problem is working memory, and most of the content here is inefficient AF because complexification is a karma attractor.)
What’s better than a language model, though, is a good autostructure. I like Artificially Aware on YouTube (https://youtu.be/-P97YNmTUL4). It makes esoteric info really accessible, and all you have to do is wish for it. I actually forgot all about these comments until I saw the video on The Bell Curve and remembered that I walked away from a half-written response.
I don’t really know what I’m doing here, honestly. I just follow research compulsions 90% of the time and see where it leads me.
For future reference, the correct answer to my question was probably this: https://www.lesswrong.com/tags/all
I love a good index.