Bay Area AI & Alignment Scene Q&A
For this Tuesday’s meetup I’m going to do a Q&A session about the AGI and alignment labs in the SF Bay Area, and what the whole AI scene there is like.
It’s been wild since I moved from NYC: you meet people from all the major companies, learn what their internal politics are like, who the competent teams are, and what they think about the dangers of what they’re building. There are parties with celebrities, venture capitalists, CEOs, and journalists, all now convinced that AI powerful enough to dramatically change civilization is a dangerous prospect, possibly arriving in the next few decades.
I’ve now spent a year doing interpretability research, trying to understand the internals of language models so that we can eventually detect failure modes we can’t test for in other ways. All the while I’ve been surrounded by the progress, often hearing about developments before they reach the general public.
For this meetup I’m thinking I’ll start with a brief overview to set the scene: what things are like in the Bay Area right now, which labs are doing alignment research and building powerful models, and some of the drama behind that. Then I’ll open it up to questions about the various alignment research agendas and groups, interpretability, the stories I know behind various happenings, and what modern language model investment looks like and where I think it’s going.
=== WHEN+WHERE ===
7:00pm Tuesday, May 2nd
The Solarium (Join the mailing list at https://groups.google.com/g/overcomingbiasnyc and say you came from LessWrong to access the post with the address)