Grow the Cognitive Surplus-Powered AI Risks Research Community
Luke already identified funding Friendliness-related academic papers as a way to purchase AI risk reduction. But being an academic is not a necessary condition for producing useful research.
There already seems to be a community of amateurs who think about decision theory and other Friendliness-related issues in their spare time on Less Wrong. Some simple ideas that might grow/encourage this community:

- Offer cash prizes for useful insights, or feature useful insights on the Singularity Institute blog.
- Respond to the work of amateurs, as a way of recognizing its legitimacy.
- Improve recommended reading lists for understanding/contributing to FAI-related topics, and increase their visibility.
- Sponsor a "FAI programmer wannabe" mailing list/reading group.
I like your later suggestions much more than your first. We already have a supply of interested people, so enabling people will probably yield much more bang per buck than rewarding them. (And of course for those who haven't seen that video)