Are we seeing the fruits of investments in training a cohort of new AI safety researchers that happened to ripen just when DALL-E dropped, but weren’t directly caused by DALL-E?
For some n=1 data, this describes my situation. I’ve posted about AI safety six times in the last six months despite having posted only once in the four years prior. I’m an undergrad who started working full-time on AI safety six months ago thanks to funding and internship opportunities that I don’t think existed in years past. The developments in AI over the last year haven’t dramatically changed my views. It’s mainly about the growth of career opportunities in alignment for me personally.
Personally, I agree with jacob_cannell and Nathan Helm-Burger that I’d prefer an AI-focused site; I’m mainly just distracted by the other stuff. It would be cool if more people could post on the Alignment Forum, but I do appreciate the value of having a site with a high bar that can be shared with outsiders without having to explain all the other content on LessWrong. I didn’t know you could adjust karma by tag, but I’ll be using that to prioritize AI content now. I’d encourage anyone who doesn’t want my random AI linkposts to use the tags as well.
Is this a positive feedback loop where increased AI safety posts lead to people posting more AI safety posts?
This also feels relevant. I share links with a little context when I think some people would find them interesting, even if not everybody will. I don’t want to crowd out other kinds of content; I think it’s been well received so far, but I’m open to different norms.