LessWrong is becoming more and more specialised, which is scaring off newcomers who aren’t interested in AI.
Yup, sorry.
Do any of the posts on LessWrong make any difference in the general psychosphere of AI alignment?
Sometimes. E.g. the Waluigi effect post was in March, and I’ve seen that mentioned by random LLM users. CNN had Connor Leahy on as an AI expert around the same time, and news coverage of Bing chat sometimes glossed Evan Hubinger’s post about it.
Does Sam Altman or anyone at OpenAI engage with LessWrongers?
Yeah. And I don’t just mean on Twitter, I mean it’s kinda hard not to talk to e.g. Jan Leike when he works there.
What are some basic beginner resources someone can use to understand the flood of complex AI posts currently on the front page?
Yeah, this is pretty tricky, because fields accumulate things you have to know to be current in them. For “what’s going on with AI in general” there are certainly good posts on diverse topics here, but nothing as systematic and in-depth as a textbook. I’d say just look for generically good resources to learn about AI, learning theory, and neural networks. Some people have reading lists (e.g. MIRI, Vanessa Kosoy, John Wentworth), many of which are quite long and specialized; obviously how deep down the rabbit hole you go depends on what you want to learn. For alignment topics specifically, various people have made syllabi (this one, which I think is primarily based on Richard Ngo’s, is convenient to recommend despite occasional disagreements).
Thanks for that. Out of curiosity then, do people use the articles here as part of bigger articles in other academic journals? Is this place sort of the ‘launching pad’ for ideas and raw data?