An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.
(i) All of the above organisations are now in a position to develop specific relevant research plans and apply to get them funded, rather than the funding going to one organisation over another.
(ii) Given the number of “non-risk” AI researchers at the conference, and the many more who signed the letter, this is a wonderful opportunity to follow up by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.
There will be many more excellent projects submitted than the funding will cover, and this will be a great way to demonstrate that there are plenty of tractable problems and work that can be undertaken immediately in this area. This should hopefully both attract more AI researchers to the field and bring in additional funders who see how timely and worthy of funding this work is.
Consider it seed funding for the whole field of AI safety!
Sean (CSER)
Seconded (as an FLI person)
Vika, thank you and all at FLI so much for all you’ve done recently. Three amazing announcements from FLI on each other’s heels, each a gigantic contribution to increasing the chances that we’ll all see a better future. Really extraordinary work.
Thanks Paul! We are super excited about how everything is working out (except the alarmist media coverage full of Terminators, but that was likely unavoidable).