Do we know why he chose to donate in this way: donating to FLI (rather than FHI, MIRI, CSER, some university, or a new organization), and setting up a grant fund (rather than directly to researchers or other grantees)?
An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.
(i) All of the above organisations are now in a position to develop specific relevant research plans, and apply to get them funded—rather than it going to one organisation over another.
(ii) Given the number of “non-risk” AI researchers at the conference, and many more signing the letter, this is a wonderful opportunity to follow up with that by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.
There will be many more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are a lot of tractable problems and immediately actionable work to be done in this area. This should hopefully both attract more AI researchers to the field, and additional funders who see how timely and worthy of funding this work is.
Consider it seed funding for the whole field of AI safety!
Sean (CSER)
Seconded (as an FLI person)
Vika, thank you and all at FLI so much for all you’ve done recently. Three amazing announcements from FLI on each other’s heels, each a gigantic contribution to increasing the chances that we’ll all see a better future. Really extraordinary work.
Thanks Paul! We are super excited about how everything is working out (except the alarmist media coverage full of Terminators, but that was likely unavoidable).
My guesses: he chose to donate to FLI because their star-studded advisory board makes them a good public face of the AI safety movement. Yes, they are a relatively young organization, but it looks like they did a good job putting the research priorities letter together (I’m counting 3685 signatures, which is quite impressive… does anyone know how they promoted it?). Also, since they will only be distributing grants, not spending the money themselves, organizational track record is a bit less important. (And they may rely heavily on folks from MIRI/FHI/etc. to figure out how to award the money anyway.)

The money will be distributed as grants because grant money is the main thing that motivates researchers, and Musk wants to change the priorities of the AI research community in general, not just add a few new AI safety researchers on the margin. And holding a competition for grants means you can gather more proposals from a wider variety of people. (In particular, people who currently hold prestigious academic jobs and don’t want to leave them for a fledgling institute.)
Most of the signatures came in after Elon Musk tweeted about the open letter.