Unpopular opinion (on this site, I guess): AI alignment is not a well-defined problem, and there is no clear-cut resolution to it. It will be an incremental process, similar to cybersecurity research.
About the money, I would do the opposite: select researchers who would do it for free, just pay their living expenses, and give them whatever resources they ask for.
I don’t think that’s an unpopular opinion! That’s just true.
Money-wise, we have a Cassandra problem: no one believes us. So the question is, how do we get everyone competent enough to end the world to realize that they're about to end the world? Can money be used to do this? Apparently there is money, but not much time.
Just like parenting.