This seems way overdetermined. For example, AI labs have proven extremely successful at spending arbitrary amounts of money to increase capabilities (cf. scaling laws), while there's been no comparable ability to convert arbitrary amounts of money into progress on alignment.