[I work at CAIF and CLR]
Thanks for this!
I recommend making it clearer that CAIF is not focused on s-risk and is not formally affiliated with CLR (beyond overlap in personnel). While there is significant overlap in CLR’s and CAIF’s research interests, CAIF’s mission is much broader than CLR’s (“improve the cooperative intelligence of advanced AI for the benefit of all”), and its founders and leadership are motivated by a variety of catastrophic risks from AI.
Also, “foundational game theory research” isn’t an accurate description of CAIF’s scope. CAIF is interested in a variety of fields relevant to the cooperative intelligence of advanced AI systems. While this includes game theory and decision theory, I expect that a majority of CAIF’s resources (measured in both grants and staff time) will be directed at machine learning, and that we’ll also support work from the social and natural sciences. See Open Problems in Cooperative AI and CAIF’s recent call for proposals for a better sense of the kinds of work we want to support.
[ETA] I don’t think “foundational game theory research” is an accurate description of CLR’s scope either, though I understand how public writing could give that impression. It is true that several CLR researchers have worked, and are currently working, on foundational game and decision theory research. But CLR researchers work on a variety of things. Much of our recent technical and strategic work on cooperation is grounded in more prosaic models of AI (though, to be fair, much of this is not yet public; there are some forthcoming posts that hopefully make this clearer, which I can link back to when they’re up). Other topics include risks from malevolent actors and AI forecasting.
[Edit 14/9] Some of these “forthcoming posts” are up now.
Thanks for the update! We’ve edited the section on CLR to reflect this comment; let us know if it still looks inaccurate.