Humans as social animals have a strong instinctual bias towards trusting conspecifics in prosperous times. That makes sense from a game-theoretic, strengthen-the-tribe perspective. But I think it leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia when building an ASI that has no existential need for pro-social behavior.
The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and ‘samey’ enough that an ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing. Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).
The “game-theoretic strengthen-the-tribe perspective” is a completely unpersuasive argument to me. The psychological unity of humankind, OTOH, is persuasive when combined with the observation that this unitary psychology changes slowly enough that the human mind’s robust capability to predict the behavior of conspecifics (and manage the risks they pose) can keep up.
IMO, the psychological unity of humankind thesis is a case of typical-minding and overgeneralizing, combined with overestimating the role of genetics/algorithms and underestimating the role of data in what makes us human.
I basically agree with the game-theoretic perspective, combined with another consideration: as long as humans are relevant to the economy, you more or less have to help those humans if you want to profit. That said, even an AI that merely automates a lot of work could disrupt this dynamic very heavily, since a CEO with perfectly loyal AI workers that never demanded anything would no longer need humans in the broader economy.