As 2018 began, I started thinking about what I should do if I personally take AI seriously. So your post is timely for me. I’ve spent the last couple weeks figuring out how to catch up on the current state of AI development.
What I should do next is still pretty muddy. Or scary.
I have a computer engineering degree and have been a working software developer for several years. I do consider myself a “technical person,” but I haven’t focused on AI before now. I think I could potentially contribute to AI safety research, if I spend some time studying first. I’m not caught up on the technical skills these research guides point to:
MIRI’s Research Guide
80,000 Hours’ career review of research into risks from artificial intelligence (the section “What are some good first steps if you’re interested?” is very relevant).
Bibliography for the Berkeley Center for Human-Compatible AI (I had this link saved before reading this post.)
But I’m also not intimidated by the topics or the prospect of a ton of self-directed study. Self-directed study is my idea of fun. I’ve already started on some of the materials.
The scary stuff is:
I could lose myself for years studying everything in those guides.
I have no network of people to bounce any ideas or plans off of.
I live in the bible belt, and my day-to-day interactions are completely devoid of anyone who would take any of this seriously.
People in the online community (rationality or AI Safety) don’t know I exist, and I’m concerned that spending a lot of time getting noticed is a status game and time sink that doesn’t help me learn about AI as fast as possible.
There’s also the big step of actually reaching out to people in the field. I don’t know how to tell when I’m ready or qualified, or whether it’s worth contacting people sooner rather than later; since I’m prone to anxious underconfidence, I could at least let people know I exist, even if I doubt I’m impressive.
I do feel like one of these specialty CFAR workshops would be a wonderful kick-start, but none are yet listed for 2018.
[Context: I’m Matthew Graves, and currently handle a lot of MIRI’s recruiting, but this is not MIRI’s official view.]
We’re hiring engineers to help with our research program, which doesn’t require extensive familiarity with AI alignment research.
When reading through research guides, it’s better to take a breadth-first approach where you only go deep on the things that are interesting to you, and not worry too much about consuming all of it before you start talking with people about it. Like with software projects, it’s often better to find some open problem that’s interesting and then learn the tools you need to tackle that problem, rather than trying to build a general-purpose toolkit and then find a problem to tackle.
There are some online forums where you can bounce ideas and plans around; LW is historically a decent place for this, as are Facebook groups like AI Safety Open Discussion. I expect there to be more specialty CFAR workshops this year, but I encourage you to get started on things now rather than waiting for one. There are also people like me at MIRI and other orgs who field these sorts of questions. I encourage you to contact us too early instead of too late; the worst-case scenario is that we send you a stock email with links we think will be helpful, rather than you eating a huge amount of our time. (For getting detailed reactions to plans, I suspect posting to a group where several different people might have time to respond soon would work better than emailing only one person, and it’s considerably easier to answer specific questions than to give a general reaction to a plan.)
I think you’re right that getting additional feedback (bouncing stuff off people) is good.
Unfortunately, my rough sense right now is that things are geographically constrained (e.g., there’s stuff happening at CHAI in Berkeley, FHI in Oxford, and DeepMind in London, but not a lot of concentrated work elsewhere). If you’re in the bible belt, my guess is that Roman Yampolskiy is probably the closest (maybe?) person who’s doing lots of stuff in the field.
Speaking from my experience with CFAR (and not in any official capacity whatsoever), I think the AI Fellows program tends to be held once a year in the summer/fall (although this might change with additional funding), so that’s maybe also a ways off.
I’d encourage you, though, to reach out to people sooner rather than later, as you mention. It’s been my experience that people are helpful when you reach out, if you’re genuine about this stuff.
I haven’t reached out to anyone yet, primarily because I imagined that they (Luke, Eliezer, etc.) receive many of these kinds of “I’m super excited to help, what can I do?” emails, and I pattern-matched that onto “annoying person who didn’t read the syllabus.” What has your experience been?
(I live in Oregon)
(This is part of what I was going for with “find a person at a company who’s NOT the highest-profile person to get feedback from.”)
Also, one person said to me, “I’m generally quite happy to answer succinct, clear questions like ‘I’m considering whether to do X. I’ve thought of considerations N, M, and O. I’m wondering if consideration W is relevant?’”
As opposed to “Hey, what do you think of OpenAI/DeepMind/MIRI?” or other vague (“can’t quite tell if this is an undercover journalist”) style questions.