IMO, AI safety has several problems at once: the science of how to make AIs safe is only partially known (though it has made progress); the evidence base for the field, especially on big questions like deceptive alignment, is way smaller than in a lot of other fields (for several reasons); and, as your last point notes, companies have strong incentives to make AI more powerful.
Add them all up, and it’s a tricky problem.