Soon: a weekly AI Safety prerequisites module on LessWrong
(Edit: we have a study group running a week ahead of this series that adds important content. It turns out that getting that content ready on a weekly basis would force us to cut corners. We prefer quality over speed, and we also like predictability, so we decided to cut ourselves some slack and publish every two weeks instead for the time being.)
Hey there! It’s been about 6 weeks since RAISE doubled down on its mission to make learning AI Safety as convenient as possible.
We’ve been geared towards STEM majors, but the grand vision was to eventually lay out a learning path that any high-school graduate could take.
So a prerequisites track was on our wish list. Little did we know that such a track had already been constructed, and then abandoned, in 2015.
We met its creators, Erik Istre and Trent Fowler, and we decided to collaborate. There are already at least 20 weeks of content to work with, and they’re going to extend it further. Many thanks to them!
For what it’s worth: the track was shown to various leading figures in AIS, and the reception has thus far been uniformly positive. To get a sneak peek, register on our platform and have a look at the column called “Prerequisites: Fundamentals of Formalization”. The first two levels are already uploaded.
A module will be published every Friday, starting with “level 1: Basic Logic” on May 4th. Let’s get some momentum going here! If you complete the track in its entirety, you should be ready to understand most of the work in AI Safety.
Each module is a set of references to textbooks explaining important topics like Logic, Set theory, Probability, and Computability theory. The intention is to 80/20 a bachelor’s degree: by covering 20% of the material, you should learn 80% of the relevant concepts. At the end of each module we have added some exercises of our own. They are meant not for practice but for validation: if you think you already know a subject, you can use them to verify it.
All but two of the referenced textbooks are available online for free. The others will be announced in time; you won’t need them before week 3.
We hope that this will help some of you learn AI Safety!
Warm regards,
Toon, Veerle, Johannes, Remmelt, Ofer
The RAISE team
PS: If you’d like to generally stay up to date with RAISE, join our Facebook group or visit our website.
Is this specific to MIRI or would it also include the work done by OpenAI, DeepMind, FHI, and CHAI? I don’t see any resources on machine learning currently but perhaps you intend to add those later.
What’s currently present is mostly a sketch of a part of what we intend to do with it. We do eventually plan to extend into machine learning as well.
The limitation at the time was that my academic background was purely in foundations of mathematics research, and so the MIRI approach was a more natural starting point. I am working on remedying these gaps in my knowledge :)
It also seems to me like it’s easier to find resources on ML stuff (since it’s an actual existing field) moreso than the agent-foundations paradigm. (I’m not very confident in this, maybe safety-oriented ML is just as ambiguous as agent-foundations?)
It also seems like this is a pretty hard thing to do well in the first place, and probably benefits from do-one-thing-well. If Safety-Oriented ML needs better introductory texts too, I’d guess it’s better for someone(s) with the relevant skills to focus more deeply on that, and for toon/erik et al to focus on iterating on their current project for awhile.
There are a lot of resources on logic, computability and complexity theory, provability, etc. which underlie Agent Foundations, but not much on how to synthesize them all into Agent Foundations. Similarly, there are a lot of resources on deep learning, reinforcement learning, etc. but not much on how to use them to answer important safety questions. In both Agent Foundations and safety-oriented ML, it seems to me that the hard part with no good resources yet is figuring out what the right question to ask is. So I’m not sure which one would be easier to teach. (Though RAISE seems to be targeting the underlying background knowledge, not the “figure out what to ask” question.)
I do agree with the do-one-thing-well intuition.
Yeah that makes sense
There are lots of graduate ML programs that will give you ML background (although that might not be the most efficient route; e.g. compare with Google Brain Residency).
Is there a clear academic path towards getting a good background for AF? Maybe mathematical logic? RAISE might be filling that niche?
Plausibly an undergraduate math degree would be enough? Agent Foundations is often over my head, but it’s only a little over my head—I definitely have the sense that I could understand the background with not much effort.
Fwiw I also think an undergraduate CS degree is enough to get a good background for safety-oriented ML, as long as you make sure to focus on ML.
The use of the graduate degree is more in training you to do research than to understand any particular background knowledge well.
(Also I’m not claiming that RAISE is not filling a niche—it seems very plausible to me that there are people who would like to work on Agent Foundations who are currently working professionals and not at college, and for them a curated set of resources would be valuable.)
I don’t think most places have enough ML courses at the undergraduate level; I’d expect 0–2 undergraduate ML courses at a typical large or technically focused university. Of course, you can often take graduate courses as an undergraduate as well.
I should clarify that we only intend to design pathways that complement, augment, or supplement existing resources. As you rightly point out, safety-oriented ML is already saturated with much better material from experts. I don’t plan to replace that, only to provide a pathway through it some day. I see my role as, at the least, providing tools for someone to verify their knowledge, to challenge it, and to never be lost about where they need to go next.
I have in mind the analogy of a mountain range and different approaches to guiding people through it. Right now, the state of learning about agent foundations feels something like: “Hey, here’s a map of the whole range. Go get to the top of those mountains. Good luck.” I would like it to be something like: “Here are the trails of least resistance, given your skill set and background, to get to the top of those mountains.”
Thanks again for doing this!