Which LessWrong/Alignment topics would you like to be tutored in? [Poll]
Would you like to be tutored in applied game theory, natural latents, CFAR-style rationality techniques, “general AI x-risk”, Agent Foundations, anthropics, or some other topics discussed on LessWrong?
I’m thinking about prototyping some topic-specific LLM tutor bots, and would like to prioritize topics that multiple people are interested in.
Topic-specific LLM tutors would be customized with things like pre-loaded relevant context, helpful system prompts, and more focused testing to ensure they work.
Note: I’m interested in topics that are written about on LessWrong, e.g. Infra-Bayesianism, and not magnetohydrodynamics.
I’m going to use the same poll infrastructure that Ben Pace pioneered recently. There is a thread below where you add and vote on topics/domains/areas where you might like tutoring.
Karma: upvote/downvote to express enthusiasm about there being tutoring for a topic.
Reacts: click on the agree react to indicate you personally would like tutoring on a topic.
New Poll Option: add a new topic for people to express interest in being tutored on.
For the sake of this poll, I’m more interested in whether you’d like tutoring on a topic or not, separate from the question of whether you think a tutoring bot would be any good. I’ll worry about that part.
Background
I’ve been playing around with LLMs a lot in the past couple of months, and so far my favorite use case is tutoring. LLM assistance helps via multiple routes: providing background context with less effort than external search/reading, keeping me engaged via interactivity, generating examples, and breaking down complex sections into more digestible pieces.
What I want for rationality techniques is less a tutor and more an assertive rubber duck walking me through things when my capacity is scarce.
Poll for LW topics you’d like to be tutored in
(please use agree-react to indicate you’d personally like tutoring on a topic, I might reach out if/when I have a prototype)
Note: Hit cmd-f or ctrl-f (whatever normally opens search) to automatically expand all of the poll options below.
CFAR-style Rationality Techniques
Writing well
Decision Theory
Infra-Bayesianism
Applied Game Theory
Natural Latents
Agent Foundations
Applying decision theory to scenarios involving mutually untrusting agents.
Anthropics
What is the status of this project? Are there any estimates of timelines?