Hi all. In a few hours I’ll be taking over as executive director at MIRI. The LessWrong community has played a key role in MIRI’s history, and I hope to retain and build your support as MIRI moves towards the mainstream, with more and more people joining the global conversation about long-term AI risks and benefits.
Below I’ve cross-posted my introductory post on the MIRI blog, which went live a few hours ago. The short version is: there are very exciting times ahead, and I’m honored to be here. Many of you already know me in person or through my blog posts, but for those of you who want to get to know me better, I’ll be running an AMA on the Effective Altruism Forum at 3 PM Pacific on Thursday, June 11th.
I extend to all of you my thanks and appreciation for the support that so many members of this community have given to MIRI throughout the years.
Taking the reins at MIRI
Hello, I’m Nate Soares, and I’m pleased to be taking the reins at MIRI on Monday morning.
For those who don’t know me, I’ve been a research fellow at MIRI for a little over a year now. I attended my first MIRI workshop in December 2013, while I was still working at Google, and was offered a job soon after. Over the last year, I wrote a dozen papers, half of them as primary author. Six of those papers were written for the MIRI technical agenda, which we compiled in preparation for the Puerto Rico conference put on by the Future of Life Institute (FLI) in January 2015. Our technical agenda is cited extensively in the research priorities document referenced by the open letter that came out of that conference. In addition to the Puerto Rico conference, I attended five other conferences over the course of the year and gave talks at three of them. I also put together the MIRI research guide (a resource for students interested in getting involved with AI alignment research), and of course I spent a fair bit of time doing the actual research, at workshops, at researcher retreats, and on my own. It’s been a jam-packed year, and it’s been loads of fun.
I’ve always had a natural inclination towards leadership: in the past, I’ve led a FIRST Robotics team, managed two volunteer theaters, served as president of an Entrepreneur’s Club, and co-founded a startup or two. However, this is the first time I’ve taken on a professional leadership role, and I’m grateful that I’ll be able to call upon the experience and expertise of the board, of our advisors, and of outgoing executive director Luke Muehlhauser.
MIRI has improved greatly under Luke’s guidance these last few years, and I’m honored to have the opportunity to continue that trend. I’ve spent a lot of time in conversation with Luke over the past few weeks, and he’ll remain a close advisor going forward. He and the management team have spent the last year or so really tightening up the day-to-day operations at MIRI, and I’m excited about all the opportunities we have open to us now.
The last year has been pretty incredible. Discussion of long-term AI risks and benefits has finally hit the mainstream, thanks to the success of Nick Bostrom’s Superintelligence and FLI’s Puerto Rico conference, and due in no small part to years of movement-building and effort made possible by MIRI’s supporters. Over the last year, I’ve forged close connections with our friends at the Future of Humanity Institute, the Future of Life Institute, and the Centre for the Study of Existential Risk, as well as with a number of industry teams and academic groups focused on long-term AI research. I’m looking forward to our continued participation in the global conversation about the future of AI. These are exciting times in our field, and MIRI is well positioned to grow. Indeed, one of my top priorities as executive director is to grow the research team.
That project is already well under way. I’m pleased to announce that Jessica Taylor has accepted a full-time position as a MIRI researcher starting in August 2015. We are also hosting a series of summer workshops focused on various technical AI alignment problems, the second of which is just now concluding. Additionally, we are working with the Center for Applied Rationality to put on a summer fellows program designed for people interested in gaining the skills needed for research in the field of AI alignment.
I want to take a moment to extend my heartfelt thanks to all the supporters who have brought MIRI to where it is today. We have a slew of opportunities before us, and it’s all thanks to your effort and support over the past years; MIRI couldn’t have made it this far without you. Exciting times are ahead, and your continued support will allow us to grow quickly and pursue all the opportunities that the last year has opened up.
Finally, in case you want to get to know me a little better, I’ll be answering questions on the Effective Altruism Forum at 3 PM Pacific on Thursday, June 11th.
Onwards,