It’s not adequate to “get governments to mandate that [Friendliness] be implemented in any AI”, because Friendliness is not a robot-building standard—see the rest of my comment. The statement about government rationality was more tangential, about governments doing anything at all concerning such a strange topic, and wasn’t meant to imply that this particular decision would be rational.