What baffles me is the vague idea that developing a theory of friendliness has any significant synergy with developing a theory of intelligence.
It is not clear what discussions you are referring to, but there is a kind of economic synergy running the other way: machine intelligence builders will need to give humans what they want from the start, or their products won't sell.
So, for example, automobile makers have economic incentives to figure out whether humans prefer to be thrown through windscreens or to have airbags explode in their faces.
To give a more machine-intelligence-oriented example, Android face recognition must resolve some privacy-related issues before it can be marketed, because it comes close to the "creepy line". Without a keen appreciation of the values involved, it can't be deployed or sold.
"It is not clear what discussions you are referring to"
Yeah, though I seem to recall statements from some people here that "Friendliness IS the AI"; I couldn't take them at face value because of the same obvious question the OP raises.