I have heard about the idea where you commit to a $100 million reward for any ML researcher or mathematician who solves alignment, and simultaneously pay 100 top ML researchers and mathematicians $1 million each to spend a year doing nothing but pursuing a solution to alignment (chasing the bounty in the process). Even if all 100 of them fail, you have still selected the best 100 out of every candidate who applied for those positions, so a large proportion of them might keep working on the problem on their own afterwards in pursuit of the ongoing $100 million bounty. One way or another, many of these influential people will become convinced that the problem is significant and tell their friends, or even contract their friends as consultants to help with the problem.
There are plenty of trust issues, going both ways, but I’m not a grantmaker or lawyer, and I think some smart, experienced people could probably figure out how to mitigate most of them.