I’m confused by the sudden upsurge in AI content. People in technical AI alignment are there because they already had strong priors that AI capabilities are growing fast. They’re aware of major projects. I doubt DALL-E threw a brick through Paul Christiano’s window, Eliezer Yudkowsky’s window, or John Wentworth’s window. Their windows were shattered years ago.
Here are some possible explanations for the proliferation of AI safety content. As a note, I have no competency in AI safety and haven’t read the posts. These are questions, not comments on the quality of these posts!
Is this largely amateur work by novice researchers who did have their windows shattered just recently, and are writing frantically as a result?
Are we seeing the fruits of investments in training a cohort of new AI safety researchers that happened to ripen just when DALL-E dropped, but weren’t directly caused by DALL-E?
Is this the result of current AI safety researchers working extra hard?
Are technical researchers posting things here that they’d normally keep on the AI alignment forum because they see the increased interest?
Is this a positive feedback loop where increased AI safety posts lead to people posting more AI safety posts?
Is it inhibition, where the proliferation of AI safety posts makes potential writers feel crowded out? Or do AI safety posts bury the non-AI safety posts, lowering their clickthrough rates, so authors get unexpectedly low karma and engagement and therefore post less?
I am not in AI safety research and have no aptitude or interest in following the technical arguments. I know these articles have other outlets. So for me, setting aside how pressing the issue is for the world at large, these articles are strictly an inconvenience on the website. I don’t resent that. But I do experience the “inhibition” effect, where I feel skittish about posting non-AI safety content because I don’t see others doing it.
There used to be very strong secrecy norms at MIRI. Then there was a strategic update on the usefulness of public debate and of reducing secrecy.
I don’t believe there was a strategic update in favor of reducing secrecy at MIRI. My model is that everything that they said would be secret is still secret. The increase in public writing is not because it became more promising, but because all their other work became less so.
Maybe saying “secrecy” is the wrong way to phrase it. The main point is that MIRI’s strategy shifted toward more public writing.
Everything that’s in the AI Alignment Forum gets by default also shown on LessWrong. The AI Alignment Forum is a way to filter out amateur work.
I think we’re primarily seeing:
Is this largely amateur work by novice researchers who did have their windows shattered just recently, and are writing frantically as a result?
and
Are we seeing the fruits of investments in training a cohort of new AI safety researchers that happened to ripen just when DALL-E dropped, but weren’t directly caused by DALL-E?
For some n=1 data, this describes my situation. I’ve posted about AI safety six times in the last six months despite having posted only once in the four years prior. I’m an undergrad who started working full-time on AI safety six months ago, thanks to funding and internship opportunities that I don’t think existed in years past. The developments in AI over the last year haven’t dramatically changed my views; for me personally, it’s mainly about the growth of career opportunities in alignment.
Personally, I agree with jacob_cannell and Nathan Helm-Burger: I’d prefer an AI-focused site, and I’m mainly just distracted by the other stuff. It would be cool if more people could post on the Alignment Forum, but I do appreciate the value of having a site with a high bar that can be shared with outsiders without explaining all the other content on LessWrong. I didn’t know you could adjust karma by tag, but I’ll be using that to prioritize AI content now. I’d encourage anyone who doesn’t want my random linkposts about AI to use the tags as well.
Is this a positive feedback loop where increased AI safety posts lead to people posting more AI safety posts?
This also feels relevant. I share links with a little bit of context when I think some people would find them interesting, even when not everybody will. I don’t want to crowd out other kinds of content; I think it’s been well received so far, but I’m open to different norms.