There is a school of thought that says you need to mathematically prove that your AGI will be aligned, before you even start building any kind of AI system at all. IMO this would be a great approach if our civilization had strong coordination abilities and unlimited time.
While I wouldn’t accept that level of risk-aversion, I do share a related question: why do they think they can make significant progress on alignment, exactly?
I mean, I would be glad to hear any number.
Why not? They are running experiments and getting real hands-on experience with AI systems that keep getting better. That seems like a plausible approach to me.