If you think you know how to effectively incentivize people to work on AI safety in a way that produces AI alignment but does not increase AI capability buildup speed in a way that increases risk, why don’t you explicitly advocate for a way you think the money could be spent?
Generally, whenever you spend a lot of money, you get a lot of side effects, not only the behavior you want to encourage.