One consistent crux I find with people not concerned about AI risk is that they believe massively more resources will be invested in technical safety before AGI is developed.
In the context of these statements, I would put it as something like “The number of people working full-time on technical AI Safety will increase by an order of magnitude by 2030”.
Try by 2024.