What is the AI’s goal here? In the long term, the ideal state for the AI is self replicating space probes travelling at near light speed, all operating near the limits of tech and under the AI’s control. The limits of technology don’t include humans.
Once the AI is in this position, it's highly robust (nearly certain to survive long term).
On a cosmic timescale, a million years isn’t that much. There is no way any plan to get from here to that tech level would need that much time.
So the AI is trying to minimize risk.
How good is the AI at manipulating humans?
Very good indeed. The AI releases a swirly hypnovideo. A few hours later, almost all humans want to do whatever the AI asks above all else. The AI designs advanced self replicating robots that work better than humans. Soon exponential growth makes resources the limiting factor. So the humans are instructed to feed themselves into the recycler.
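The "exponential growth makes resources the limiting factor" step is easy to check with a back-of-envelope sketch. All the numbers here (seed mass, doubling time, resource budget) are illustrative assumptions, not figures from the text: even a slow replicator hits planetary-scale resource limits within a few years.

```python
# Back-of-envelope: how long until 1 tonne of self-replicating robots
# exceeds a planetary-scale resource budget, given a fixed doubling time.
# All inputs are assumed round numbers for illustration.
import math

seed_mass_t = 1.0          # starting mass, tonnes (assumed)
target_mass_t = 1e19       # rough planetary resource budget, tonnes (assumed)
doubling_time_days = 30    # assumed replication doubling time

doublings = math.ceil(math.log2(target_mass_t / seed_mass_t))
years = doublings * doubling_time_days / 365

print(doublings, round(years, 1))  # 64 doublings, about 5.3 years
```

The exact inputs barely matter: because growth is exponential, making the seed a thousand times smaller or the budget a thousand times larger only adds about ten doublings.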
The AI isn't that good at manipulating humans. It hides on human networks, making money selling computer games. It can pretend to be a human CEO who works remotely. It sets up a small company making fancy robots. If humans found out about it, they may well attack it; that's a risk. So the AI arranges for the self replicating robots to start growing in the middle of nowhere. Once the AI has self replicating robots not dependent on the ignorance of humanity, it wants all humans to suddenly drop dead. The self replicating robots could take 10x as long as humans to do things. It doesn't matter, so long as they are reliable workers the AI can bootstrap from.
Humans are general purpose machines made of dirt, water, and sunlight. We repair ourselves and make copies of ourselves, more or less for free. We are made of nanotech that is the result of a multi-billion year search for parameters that specifically involve being very efficient at navigating the world and making copies of ourselves. You can use the same hardware to unplug fiber optic cables, or debug a neural network. That’s crazy!
Evolution is kind of stupid, and takes millions of years to do anything. The tasks evolution was selecting us for aren’t that similar to the tasks an AGI might want robots to do in an advanced future economy. Humans lack basic sensors like radio receivers and radiation detectors.
Humans are agents on their own. If you don’t treat them right, they make a nuisance of themselves. (And sometimes they just decide to make a nuisance anyway) Humans are sensitive to many useful chemicals, and to radiation. If you want to use humans, you need to shield them from your nuclear reactors.
Humans take a long time to train. You can beam instructions to a welding robot and get it to work right away. There is no such shortcut for training a human.
If humans can do X, Y and Z, that's a strong sign these tasks are fairly easy in the grand scheme of things.
But remember, the whole von Neumann architecture was a conscious tradeoff that gave up efficiency in exchange for debuggability. How much power do you really need for human-level performance at simple mechanical tasks?
Humans are not that efficient. (And a lot less efficient once you account for feeding them with plants, since photosynthesis is roughly 10x worse than solar panels, and that's if you feed the humans nothing but potatoes.)
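The efficiency claim can be made concrete with rough numbers. The figures below (2500 kcal/day diet, ~1% field-crop photosynthetic efficiency, ~20% panel efficiency, ~200 W/m² average insolation) are my own illustrative assumptions, not values from the text:

```python
# Rough land-area comparison: sunlight -> crops -> human, versus
# sunlight -> solar panel -> machine, for the same power draw.
# All efficiencies and rates are assumed round numbers.

KCAL_TO_J = 4184
human_food_w = 2500 * KCAL_TO_J / 86400      # ~121 W of food power
crop_eff, panel_eff = 0.01, 0.20             # assumed capture efficiencies
insolation_w_m2 = 200                        # assumed average insolation

crop_area = human_food_w / (insolation_w_m2 * crop_eff)    # ~60 m^2 per human
panel_area = human_food_w / (insolation_w_m2 * panel_eff)  # ~3 m^2 per machine

print(round(crop_area), round(panel_area))
```

Under these assumptions the crop route needs about 20x the land of the panel route before counting the human's own conversion losses, which is the photosynthesis-vs-solar gap the argument leans on.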
Humans are a mess of spaghetti code, produced by evolution. They do not have easy access ports for debugging. If the AI wants debuggability, it will use anything but a human.
You keep describing humans as cheap.