I don’t think improving aimability helps guard against misuse.
I think this needs to be stated more clearly: alignment and misuse are very different problems, so much so that policies and research that work for one will often not work for the other, and the worlds in which misuse dominates look quite different from the worlds in which misalignment dominates.
Note, too, that the solutions suited to misuse-focused worlds and those suited to structural-risk-focused worlds can work against each other.
Also, this is validating JDP’s prediction that people will focus less on alignment and more on misuse in their threat models of AI risk.