I’m more confident in saying that I don’t think a “warning shot” will suddenly move civilization from ‘massively failing at the AGI alignment problem’ to ‘handling the thing pretty reasonably’. If a warning shot shifts us from a failure trajectory to a success trajectory, I expect that to be because we were already very close to a success trajectory at the time.
I agree with that statement. I don’t expect our civilization to handle anything as hard and tail-ended as the alignment problem reasonably even if it tries.
FWIW, I object to the title “Yes, AI research will be substantially curtailed if a lab causes a major disaster”; it seems too confident.