It both is and isn’t an entry-level question. On the one hand, your expectation matches the expectation LW was founded to shed light on, back when EY was writing The Sequences. On the other hand, it’s still a topic a lot of people disagree on and write about here and elsewhere.
There are at least two interpretations of your question I can think of, with different answers, from my POV.
What I think you mean is, “Why do some people think ASI would share some resources with humans as a default or likely outcome?” I don’t think that and don’t agree with the arguments I’ve seen put forth for it.
But I don’t expect our future to be terrible, in the most likely case. Part of that is the chance of not getting ASI for one reason or another. But most of that is the chance that we will, by the time we need it, have developed an actually satisfying answer to “How do we get an ASI such that it shares resources with humans in a way we find to be a positive outcome?” None of us has that answer yet. But, somewhere out in mind design space are possible ASIs that value human flourishing in ways we would reflectively endorse and that would be good for us.
Humans, as social animals, have a strong instinctual bias toward trusting conspecifics in prosperous times. That makes sense from a game-theoretic, strengthen-the-tribe perspective. But I think it leaves us, as a collectively dumb mob of naked apes, entirely lacking a sensible level of paranoia when building an ASI that has no existential need for pro-social behavior.
The one salve I have for hopelessness is that perhaps the Universe will be boringly deterministic and ‘samey’ enough that ASI will find it entertaining to have agentic humans wandering around doing their mildly unpredictable thing. Although maybe it will prefer to manufacture higher levels of drama (not good for our happiness).