It also seems to me that it’s easier to find resources on ML (since it’s an actual existing field) than on the agent-foundations paradigm. (I’m not very confident in this; maybe safety-oriented ML is just as ambiguous as agent foundations?)
It also seems like this is a pretty hard thing to do well in the first place, and it probably benefits from do-one-thing-well. If safety-oriented ML needs better introductory texts too, I’d guess it’s better for someone(s) with the relevant skills to focus more deeply on that, and for Toon/Erik et al. to focus on iterating on their current project for a while.
There are a lot of resources on logic, computability and complexity theory, provability, etc., which underlie Agent Foundations, but not much on how to synthesize them all into Agent Foundations. Similarly, there are a lot of resources on deep learning, reinforcement learning, etc., but not much on how to use them to answer important safety questions. In both Agent Foundations and safety-oriented ML, it seems to me that the hard part with no good resources yet is figuring out what the right question to ask is. So I’m not sure which one would be easier to teach. (Though RAISE seems to be targeting the underlying background knowledge, not the “figure out what to ask” question.)
I do agree with the do-one-thing-well intuition.
Yeah, that makes sense.
There are lots of graduate ML programs that will give you ML background (although that might not be the most efficient route; e.g. compare with Google Brain Residency).
Is there a clear academic path towards getting a good background for AF? Maybe mathematical logic? RAISE might be filling that niche?
Plausibly an undergraduate math degree would be enough? Agent Foundations is often over my head, but only a little over my head; I definitely have the sense that I could understand the background without much effort.
Fwiw I also think an undergraduate CS degree is enough to get a good background for safety-oriented ML, as long as you make sure to focus on ML.
The value of a graduate degree is more in training you to do research than in teaching any particular background knowledge well.
(Also, I’m not claiming that RAISE is not filling a niche; it seems very plausible to me that there are people who would like to work on Agent Foundations who are currently working professionals rather than in college, and for them a curated set of resources would be valuable.)
I don’t think most places have enough ML courses at the undergraduate level; I’d expect 0–2 undergraduate ML courses at a typical large or technically focused university. Of course, you can often take graduate courses as an undergraduate as well.
I should clarify that we only intend to design pathways that complement, augment, or supplement existing resources. As you rightly point out, safety-oriented ML is more saturated, with much better material from experts. I don’t plan to replace that, only to provide a pathway through it some day. I see my role, at least, as providing tools for someone to verify their knowledge, to challenge it, and to never be lost about where they need to go next.
I have in mind the analogy of a mountain range and different approaches to guiding people through it. Right now, the state of learning about agent foundations feels something like: “Hey, here’s a map of the whole range. Go get to the top of those mountains. Good luck.” I would like it to be something like: “Here are the trails of least resistance, given your skill set and background, to the top of those mountains.”