Because conditions might change and you might come back to AI alignment research, I want to share some details of what I’ve been doing and how I’ve approached my alignment work. I’ll write this out as a personal story since that seems to be the best fit, and you can pull out whatever resonates as advice. Some of the details might seem irrelevant at first, but I promise they’re there as context that ties the whole thing together at the end.
So back in 1999 I got a lot more engaged with the Extropians mailing list (actually reading the messages rather than leaving them unread in a folder). This led to me joining the SL4 mailing list and then getting really excited about existential risks more generally (since about 1997 I had been reading and thinking a lot about nanotech/APM and its risks). Over the next few years I stayed moderately engaged on SL4 and its successors until around 2004-2005. By that point it seemed I just wasn’t cut out for AI alignment research, even though I cared a lot, and I mostly gave up on ever being able to contribute anything. I went off to live my life, got married, and worked on a PhD.
I didn’t lose touch with the community, though. When Overcoming Bias started it went straight into my RSS reader, and LW followed later on. I kept up with the goings-on of SIAI, the Foresight Institute, and other organizations.
My life changed direction in 2011. That year I dropped out of my PhD. I had lost the spirit to finish it about two years earlier, to the point that I failed classes and only worked on my research, and now my wife was sick and couldn’t work, so I needed a job that paid more. I started working as a software engineer at a startup. Over the next year or so this changed me: I was making money, I was doing something I was good at, and I kept getting better at it, which built a lot of confidence. It seemed I could do things.
So in 2012 I finally signed up for cryonics after years of procrastination. I felt good about myself for maybe the first time in my life, and I had the money to do it. In 2013 I felt even better about myself and separated from my wife, finally realizing and accepting that I was with her not because I wanted to be with her but because I didn’t want to be alone. That same year I took a new programming job in the Bay Area.
I continued on this upward trajectory for the next few years, but I didn’t think too hard about doing AI research. I figured my best bet was to make a lot of money and use it to fund the work of others. Then in 2017, after a period of thinking really hard and writing about my ideas, I noticed one day that maybe I had some comparative advantage to offer AI alignment research. Maybe not as a superstar researcher trying to solve the whole thing, but I could say some things and do some work that might be helpful.
So, that’s what I did. AI alignment research is in some sense a “hobby” for me: it’s not what I do full time and I don’t get paid for it, but it’s something I make time for and keep up with. Even if I’m not seemingly doing as much as others, I keep at it because it seems I’m able to offer something to the field in places that look neglected to me. Maybe my biggest impact will just be to have been part of the field and to have made it bigger and more active, giving it more surface area for others to stay engaged with and find on their own paths to work with more direct impact. Or maybe I’ll eventually stumble on something really important, or maybe I already have and we just don’t realize it yet. It’s hard to say.
So I hope this encourages you not to give up on AI alignment research altogether. Do what you need to do for yourself, but I also hope you don’t lose your connection to the field. One day you might wake up to realize things have changed, or that you know something that gives you a unique perspective on the problem, one that, if nothing else, might get people thinking in ways they weren’t before and inject useful noise that helps us anneal our way to a good solution. I hope you keep reading, keep commenting, and one day find you have something you need to say about AI alignment because others need to hear it.
Thanks for sharing your story and for encouraging me! I will certainly keep in touch with the AI alignment community.