There are a lot of resources on logic, computability and complexity theory, provability, etc. which underlie Agent Foundations, but not much on how to synthesize them all into Agent Foundations. Similarly there are a lot of resources on deep learning, reinforcement learning, etc. but not much on how to use them to answer important safety questions. In both Agent Foundations and safety-oriented ML, it seems to me that the hard part with no good resources yet is how to figure out what the right question to ask is. So I’m not sure which one would be easier to teach. (Though RAISE seems to be targeting the underlying background knowledge, not the “figure out what to ask” question.)
There are lots of graduate ML programs that will give you ML background (although that might not be the most efficient route; e.g. compare with Google Brain Residency).
Is there a clear academic path towards getting a good background for AF? Maybe mathematical logic? RAISE might be filling that niche?
Plausibly an undergraduate math degree would be enough? Agent Foundations is often over my head, but it’s only a little over my head—I definitely have the sense that I could understand the background with not much effort.
Fwiw I also think an undergraduate CS degree is enough to get a good background for safety-oriented ML, as long as you make sure to focus on ML.
The use of the graduate degree is more in training you to do research than to understand any particular background knowledge well.
(Also I’m not claiming that RAISE is not filling a niche—it seems very plausible to me that there are people who would like to work on Agent Foundations who are currently working professionals and not at college, and for them a curated set of resources would be valuable.)
I don’t think most places have enough ML courses at the undergraduate level; I’d expect 0-2 undergraduate ML courses at a typical large or technically focused university. Of course, you can often take graduate courses as an undergraduate as well.
I do agree with the do-one-thing-well intuition.
Yeah that makes sense