Strongly upvoted. Thanks for your comprehensive review. This might be the best answer I’ve ever received for any question I’ve asked on LW.
In my opinion, given that these other actors who’ve adopted the term are arguably more prominent leaders in the field than MIRI, it’s valid for someone in the rationality community to claim it is in fact the preferred term. A more accurate statement would be:
There is a general or growing preference for the term "AI alignment" to be used instead of "AI safety" to refer to the control problem.
There isn’t complete consensus on this, but there may not be a good reason for that: it’s largely inertia from years ago, when the control problem wasn’t distinguished as often from other ethics or security concerns about advanced AI.
Clarifying all of that by default isn’t necessary, but it would be worth mentioning if anyone asks which organizations or researchers beyond MIRI also agree.