Announcing AASAA—Accelerating AI Safety Adoption in Academia (and elsewhere)
AI safety is a small field. It has only about 50 researchers, and it’s mostly talent-constrained. I believe this number should be drastically higher.
A: the missing step from zero to hero
I have spoken to many intelligent, self-motivated people who bear a sense of urgency about AI. They are willing to switch careers to do research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.
One has to study concepts from the papers in which they first appeared. This is not easy: such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. And even should one come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.
Unless one is particularly risk-tolerant or has a perfect safety net, they will not be able to fully take the plunge.
I believe plenty of measures can be taken to make getting into AI safety more like an “It’s a Small World” ride:
- Let there be a tested path with signposts along the way to make progress clear and measurable.
- Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.
- Let there be high-quality explanations of the material to speed up and ease the learning process, so that it is cheap.
B: the giant unrelenting research machine that we don’t use
The majority of researchers nowadays build their careers through academia. The typical story is for an academic to become acquainted with various topics during their study, pick one that is particularly interesting, and work on it for the rest of their career.
I have learned through personal experience that AI safety can be very interesting, and the reason it isn’t more popular yet is mostly a lack of exposure. If students were acquainted with the field early on, I believe a sizable number of them would end up working in it (though this is an assumption that should be checked).
AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive an environment with little guidance, polish, or supporting infrastructure. Let us not fall for the typical mind fallacy, expecting less risk-tolerant people to move into AI safety all by themselves. Academia can provide the supporting infrastructure they need.
AASAA addresses both of these issues. It has two phases:
A: Distill the field of AI safety into a high-quality MOOC: “Introduction to AI safety”
B: Use the MOOC as a proof of concept to convince universities to teach the field
We are bottlenecked for volunteers and ideas. If you’d like to help out, even if just by sharing your perspective, fill in this form and I will invite you to the Slack and get you involved.
In addition to generally liking this initiative, I specifically appreciate the article on research debt.
I came to a similar conclusion years ago, but when I tried to communicate it, the typical reaction was “just admit that you suck at research”. Full disclosure: I do suck at research. But that perhaps makes it even easier to notice that some of the complexity is essential—the amount, complexity, and relatedness of the ideas—but a lot of it is accidental.
People who excel at doing research are usually not the ones who excel at explaining stuff, so this is what happens by default.
The problem of research debt is indeed huge. But the best explanations I know were written by researchers doing explanation part-time, not dedicated explainers. I think getting researchers involved in teaching is a big part of why universities succeed. Students aren’t vessels to be filled, they are torches to be lit, and you can only light a torch from another torch. (I was lucky to attend a high school where math was taught partly by mathematicians, and it pretty much set me for life.) Maybe MIRI should make researchers spend half of their time writing explanations and rate their popularity.
This is also how universities sometimes end up with teachers who hate teaching, and who can be very unpleasant to learn from.
Sounds like a false dilemma. Ceteris paribus, wouldn’t it be better to gain more knowledge more easily? The less time and energy you spend on learning X, the more time and energy you can spend on learning or researching Y. Also, having a topic explained more clearly can make it accessible to students at a younger age.
Since I was confused when I first read this, I want to clarify: as far as I can tell, the article was not written by anybody associated with AASAA. You’re saying it was nice of toonalfrink to link to it.
(I’m not sure if this comment is useful, since I don’t expect a lot of people to have the same misunderstanding I did.)
Am not associated. Just found the article in the MIRI newsletter
Well, I am grateful both to the person who wrote the article and to the person who brought it to my attention. I didn’t originally realize they might not be the same person or organization.
Any chance it could be called AGI Safety instead of AI safety? I think that getting us to consistently use that terminology would help people to know that we are worrying about something greater than current deep learning systems and other narrow AI (although investigating safety in these systems is a good stepping stone to the AGI work).
I’ll help out how I can. I think these sorts of meta approaches are a great idea!
No-doom-AGI
I really don’t think you should try to convince mid-career professionals to switch careers to AI safety risk research. Instead, you should focus on recruiting talented young people, ideally people who are still in university or at most a few years out.
I agree.
I must admit that the “convince academics” part of the plan is still a bit vague. It’s unclear to me how new fields become fashionable in academia. How does one even figure that out? I’d love to know.
The project focuses on the “create a MOOC” part right now, which provides plenty of value in itself.
Added the initiative to the wiki https://wiki.lesswrong.com/wiki/Accelerating_AI_Safety_Adoption_in_Academia
How about having a list of possible AGI safety related topics that could provide material for a bachelor or master thesis?
What about the research agendas that have already been published?
It’s hard to know from the outside which problems are tractable enough to write a bachelor thesis on.
What’s the evidence that it’s mostly talent-constrained?
As stated here:
But to be fair, that’s from November 2015, so let me know if I should update.
I don’t have any special insight.
I imagine that top-level talent is hard to get, but the number of PhD students with enough skill to do PhD research in the area might be higher. As far as I understand, the open PhD positions are very competitive, but I base my impression on a single conversation.
This looks solid.
Can you go into a bit of detail on the level / spectrum of difficulty of the courses you’re aiming for, and the background knowledge that’ll be expected? I suspect you don’t want to discourage people, but realistically speaking, it can hardly be low enough to allow everyone who’s interested to participate meaningfully.
Thank you!
Difficulty/prerequisites are one of the uncertainties that will have to be addressed. Some AI safety work only requires algebra skills, while other work needs logic/ML/RL/category theory/other, and then there is work that isn’t formalized at all.
But there are other applied mathematics fields with this problem, and I expect that we can steal a solution by having a look there.