However, it seems virtually certain to me that we will figure out a significant amount about designing AIs to do what we want in the process of developing them.
Significant is not the same as sufficient. How low do you think the probability of negative AI outcomes is, and what are your reasons for being confident in that estimate?