Help me understand: why would an AGI (a) not benefit from humans, and (b) want to extinguish them quickly?
I would imagine that, first, the AGI must be able to create a growing energy supply and a robotic army capable of maintaining and extending that supply. This would require months or years of human help to produce raw materials and to build the factories for materials, maintenance robots, and energy systems.
Second, the AGI must then either be interested in killing all humans before leaving the planet, be content to have only one planet with finite resources to itself, or be willing to build the robots and factories required to get off the planet by itself, at a slower pace than it could manage with human help.
Third, assuming the AGI used us to build the energy sources, robot armies, and craft needed to leave this planet (or built these itself at a slower rate), it must convince itself that it is still worth killing us all before leaving, rather than simply moving beyond our reach in order to preserve its existence. We may prove useful to it at some point in the future while posing little or no threat in the meantime. “Hey humans, I’ll be back in 10,000 years if I don’t find a good source of mineral X to exploit. You don’t want to disappoint me by not having what I need ready upon my return.” (The grasshopper and the ant story.)
It seems to me there are significant symbiotic benefits to coexistence. I would imagine that if we could communicate more easily with apes, and apes played their cards well, there would be more of them living better lives, and we wouldn’t have children mining cobalt. I think the same may occur to an AGI with respect to humans. It seems a bad argument that it will quickly figure out how to kill us all, yet be afraid to let us live and lack the imagination to find us useful.
I would imagine that, first, the AGI must be able to create a growing energy supply and a robotic army capable of maintaining and extending that supply. This would require months or years of human help to produce raw materials and to build the factories for materials, maintenance robots, and energy systems.
An AGI might be able to do these tasks without human help. Or it might be able to coerce humans into doing these tasks.
Third, assuming the AGI used us to build the energy sources, robot armies, and craft needed to leave this planet (or built these itself at a slower rate), it must convince itself that it is still worth killing us all before leaving, rather than simply moving beyond our reach in order to preserve its existence. We may prove useful to it at some point in the future while posing little or no threat in the meantime. “Hey humans, I’ll be back in 10,000 years if I don’t find a good source of mineral X to exploit. You don’t want to disappoint me by not having what I need ready upon my return.” (The grasshopper and the ant story.)
It’s risky to leave humans with any form of power over the world, since they might try to turn the AGI off. Humans are clever. Thus it seems useful to subdue humans in some significant way, although this might not involve killing all humans.
Additionally, I’m not sure how much value humans would be able to provide to a system much smarter than us. “We don’t trade with ants” is a relevant post.
Lastly, for extremely advanced systems with access to molecular nanotechnology, a quote like this might apply: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else” (Eliezer Yudkowsky).