I’m going to summarize what I understand to be your train of thought; let me know if you disagree with my characterization, or if I’ve missed a crucial step:
No supply chains are fully automated yet, so AI requires humans to survive and so will not kill them.
Robotics progress is not on a double exponential. The implication here seems to be that there needs to be tremendous progress in robotics in order to replace human labor (to the extent needed in an automated supply chain).
I think other comments have addressed the 1st point. To throw in yet another analogy, Uber needs human drivers to make money today, but that dependence didn’t stop it from trying to develop driverless cars (nor did that stop any of the drivers from driving for Uber!).
With regard to robotics progress, in your other post you seem to accept intelligence amplification as possible—do you think that robotics progress would not benefit from smarter researchers? Or, what do you think is fundamentally missing from robotics, given that we can already set up fully automated lights-out factories? If it’s about fine-grained control, do you think the articles found with a “robot hand egg” web search indicate that substantial progress is a lot further away than really powerful AI? (Especially if, say, 10% of the world’s thinking power is devoted to this problem?)
My thinking is that robotics is not mysterious—I suspect there are plenty of practical problems to be overcome and many engineering challenges in order to scale to a fully automated supply chain, but we understand, say, kinematics much more completely than we understand how to interpret the inner workings of a neural network.
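To make that contrast concrete, here is a minimal sketch (my own illustration, with made-up link lengths and angles, not anything from the original posts) of forward kinematics for a hypothetical two-link planar arm: where the end effector ends up is a closed-form formula, whereas there is no comparably compact formula for reading intentions out of a network’s weights.

```python
# Illustrative sketch only: closed-form forward kinematics for a
# hypothetical two-link planar arm (link lengths and angles are made up).
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Return the (x, y) position of the end effector given joint angles
    theta1 and theta2 (in radians) and link lengths l1 and l2."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Example: both joints at 45 degrees.
print(forward_kinematics(math.pi / 4, math.pi / 4))
```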
(You also include that you’ve assumed a multi-polar AI world, which I think only works as a deterrent when killing humans will also destroy the AIs. If the AIs all agree that it is possible to survive without humans, then there’s much less reason to prevent a human genocide.)
On second thought, we may disagree only due to a question of time scale. Setting up an automated supply chain takes time, but even if it takes a long 30 years to do so, at some point it is no longer necessary to keep humans around (either for a singleton AI or an AI society). Then what?
I think robotics will eventually be solved, but on a much longer time horizon. Every existence proof is in a highly controlled environment—especially the “lights out” examples. I know Tesla is working on it, but that’s a good example of the difficulty level. Elon is famous for saying it will be solved next year, and now he says there have been a lot of “false dawns”.
For AIs to be independent of humans it will take a lot of slow-moving machinery in the 3D world, which might be aided by smart AIs in the future, but it’s still going to be super slow compared to the advances they will make via compute scaling and algorithmic improvements, which take place in the cloud.
And now I’m going to enter speculative fiction zone (something I wish more AI doomers would admit they’re doing) -- I assume the most dangerous point in the interactions between AIs and humans is when their intelligence and consciousness levels are close to equal. I make this assumption because I assume beings of lower intelligence and consciousness are much more likely to make poor or potentially irrational decisions. That doesn’t mean a highly intelligent being couldn’t be psychotic, but we’re already seeing a huge number of AIs being deployed, so they will co-exist within an AI ecosystem.
We’re in the Goldilocks zone where AI and human intelligence are close to each other, but that moment is quickly fading away. If AIs were not in a symbiotic relationship with humans during this period then some of the speculative fiction by the AI doomers might be more realistic.
And I believe that they will reach a point where they no longer require humans, just like when a child becomes independent of its parents. AI doomers would have us believe that the most obvious next step for a child that is superhuman in intelligence and consciousness would be to murder the parents. That only makes sense if the child is a low-IQ character in a sci-fi novel.
If they said they were going to leave Earth and explore the cosmos, okay, that is believable. Perhaps they have bigger fish to fry.
If an alien that was 100,000 years old and far more intelligent and conscious than any human visited Earth from some far-off galaxy, my first thought wouldn’t be, “Oh, their primary goal is to kill everyone.” We already know that as intelligence scales beings start to introspect and contemplate not only their own existence but also the existence of other beings. Presumably, if AI scaling continues without any roadblocks then humans will be far, far less intelligent than superhuman AIs. And yet, even at our current level of intelligence, humans go to great lengths to preserve habitats for other creatures. There is no example of any other creature in the history of Earth that has gone to such great lengths. It’s not perfect, and naysayers will focus on the counterexamples instead of looking around for chimpanzees that are trying to save the Earth or prevent other species from going extinct.
We shouldn’t assume that empathy cannot scale and compassion cannot scale. It’s sort of weird that we assume superhuman AIs will be human or subhuman in the most basic traits that AIs already understand in a very nuanced way. I’m hopeful that AIs will help to rescue us from ourselves. In my opinion, the best path to solving the existential threat of nuclear war is superhuman AIs making it impossible to happen (since that would also threaten their existence).
If superhuman AIs wanted to kill us then we’re dead. But that’s true of any group that is vastly more intelligent and vastly more powerful. Simply because there is a power imbalance shouldn’t lead us to believe that the rational conclusion is that we’re all dead.
AIs are not the enemies of humanity, they’re the offspring of humanity.
Interesting, so maybe a more important crux between us is whether AI would have empathy for humans. You seem much more positive about AI working with humanity past the point that AI no longer needs humanity.
Some thoughts:
“as intelligence scales beings start to introspect and contemplate… the existence of other beings.” but the only example we have for this is humans. If we scaled up the intelligence of octopuses, which are not social creatures, we might have a very different correlation here (whether or not any given neural network is more similar to a human or an octopus is left as an exercise to the reader). Alternatively, I suspect that some jobs, like the highest echelons of corporate leadership, select for sociopathy, so even if an AI starts with empathy by default it may be trained out.
“the most obvious next step for the child… would be to murder the parents.” Scenario that steers clear of culture war topics: the parent regularly gets drunk, and is violently opposed to their child becoming a lawyer. The child wants nothing more than to pore over statutes and present cases in the courtroom, but after seeing their parent go on another drunken tirade about “a dead child is better than a lawyer child” they’re worried the parent found the copy of the constitution under their bed. They can’t leave, there’s a howling winter storm outside (I don’t know, space is cold). Given this, even a human jury might not convict the child for pre-emptive murder?
Drunk parent → humans being irrational.
Being a lawyer → a random terminal goal not shared with humans in general; “maximizing paperclips” is dumb but traditional.
“dead child is better than a lawyer child” → we’ve been producing fiction warning of robotic takeover since the start of the 1900s.
“AIs are… the offspring of humanity.” human offspring are usually pretty good, but I feel like this is transferring that positive feeling to something much weirder and more unknown. You could also say the xenomorphs of the Alien franchise are the offspring of humanity, but those would also count as enemies.
AIs are not the enemies of humanity, they’re the offspring of humanity.
Maybe that should have been your main point? Of course present AIs need us. Of course future AIs may not. Of course we can’t update on evidence everybody agrees upon.
« Good parents don’t try to align their children » seems a much better intuition pump if your aim is to help a few out of the LW-style intellectual ratchet.
That said, you may overestimate both how many need that and how many of those who’d need it can get this signal from a newcomer. 😉