I am a Manifund Regrantor. In addition to general grantmaking, I have requests for proposals in the following areas:
Funding for AI safety PhDs (e.g., with these supervisors), particularly in exploratory research connecting AI safety theory with empirical ML research.
An AI safety PhD advisory service that helps prospective PhD students choose a supervisor and topic (similar to Effective Thesis, but specialized for AI safety).
Initiatives to critically examine current AI safety macrostrategy (e.g., as articulated by Holden Karnofsky) like the Open Philanthropy AI Worldviews Contest and Future Fund Worldview Prize.
Initiatives to identify and develop “Connectors” outside of academia (e.g., a reboot of the Refine program, well-scoped contests, long-term mentoring and peer-support programs).
Physical community spaces for AI safety in AI hubs outside of the SF Bay Area or London (e.g., Japan, France, Bangalore).
Start-up incubators for projects that aim to benefit AI safety (including evals, red-teaming, and interpretability companies), like Catalyze Impact, Future of Life Foundation, and Y Combinator’s request for Explainable AI start-ups.
Initiatives to develop and publish expert consensus on AI safety macrostrategy cruxes (e.g., via the Delphi method, interviews, or surveys), similar to the Existential Persuasion Tournament and the 2023 Expert Survey on Progress in AI.
Ethics/prioritization research into:
What values should be instilled in artificial superintelligence?
How should AI-generated wealth be distributed?
What should people do in a post-labor society?
What level of surveillance/restriction is justified by the Unilateralist’s Curse?
What moral personhood will digital minds have?
How should nations share decision-making power regarding transformative AI?
New nonprofit startups that aim to benefit AI safety.
“Physical community spaces for AI safety in AI hubs outside of the SF Bay Area or London (e.g., Japan, France, Bangalore)”- I love this initiative. Can we also consider Australia or New Zealand in the upcoming proposal?
In theory, sure! I know @yanni kyriacos recently assessed the need for an ANZ AI safety hub, but I think he concluded there wasn’t enough of a need yet?
Hi! I think in Sydney we’re ~3 seats short of critical mass, so I’m going to reassess the viability of a community space in 5–6 months :)