It’s definitely cruxy in the sense that changing my opinions on any of these would shift my p(doom) some amount.
My rough model is that there’s an unknown quantity about reality, roughly: “how strong does the oversight process have to be before the trained model does what the oversight process intended it to do?” p(doom) mainly depends on whether the actors training the powerful systems have sufficiently powerful oversight processes. That is primarily affected by the quality of technical alignment solutions, though civilizational adequacy certainly affects the answer as well.