It’d be great if y’all could add a regrantor from the Cooperative AI Foundation / FOCAL / CLR / Encultured region of the research/threat-model space. (epistemic status: conflict of interest, since if you did this I could make a more obvious case for a project)
I’m generally interested in having a diverse range of regrantors; if you’d like to suggest names or make intros (either here or privately), please let me know!