It both is and isn’t an entry-level question. On the one hand, your expectation matches the expectation LW was founded to shed light on, back when EY was writing The Sequences. On the other hand, it’s still a topic a lot of people disagree on and write about here and elsewhere.
There are at least two interpretations of your question I can think of, with different answers, from my POV.
What I think you mean is, “Why do some people think ASI would share some resources with humans as a default or likely outcome?” I don’t think that and don’t agree with the arguments I’ve seen put forth for it.
But I don’t expect our future to be terrible, in the most likely case. Part of that is the chance of not getting ASI for one reason or another. But most of that is the chance that we will, by the time we need it, have developed an actually satisfying answer to “How do we get an ASI such that it shares resources with humans in a way we find to be a positive outcome?” None of us has that answer yet. But, somewhere out in mind design space are possible ASIs that value human flourishing in ways we would reflectively endorse and that would be good for us.
Humans as social animals have a strong instinctual bias towards trust of conspecifics in prosperous times, which makes sense from a game theoretic strengthen-the-tribe perspective. But I think that leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia when building an ASI that has no existential need for pro-social behavior.
The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and ‘samey’ enough that ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing. Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).
“Game theoretic strengthen-the-tribe perspective” is a completely unpersuasive argument to me. The psychological unity of humankind OTOH is persuasive when combined with the observation that this unitary psychology changes slowly enough that the human mind’s robust capability to predict the behavior of conspecifics (and manage the risks posed by them) can keep up.
IMO, the psychological unity of humankind thesis is a case of typical minding/overgeneralizing, combined with overestimating the role of genetics/algorithms and underestimating the role of data in what makes us human.
I basically agree with the game-theoretic perspective, combined with another perspective which suggests that as long as humans are relevant in the economy, you more or less have to help those humans if you want to profit. Even an AI that merely automates a lot of work could disrupt that dynamic heavily, if a CEO could have perfectly loyal AI workers that never demanded anything from the broader economy.
That makes sense. Are there any promising developments in the field of AI safety that make you think that we will be able to answer that question by the time we need to?
It’s not my field of expertise, so I have only vague impressions of what is going on, and I certainly wouldn’t recommend anyone else use me as a source.