I would not worry at all about developing narrow AI.
A narrow AI employed to solve problems in molecular nanotechnology could be an existential risk nonetheless. It is just a question of scope and control. If it can access enough resources, and if humans are sufficiently reckless in implementing whatever it comes up with, then you could end up with runaway real-world MNT (if that is possible at all):
“We report the development of Robot Scientist “Adam,” which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation.”—The Automation of Science
...and...
“Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems.”—Computer Program Self-Discovers Laws of Physics
Just look at what genetic algorithms and evolutionary computation can already do:

This aim was achieved within 3000 generations, but the success was even greater than had been anticipated. The evolved system uses far fewer cells than anything a human engineer could have designed, and it does not even need the most critical component of human-built systems—a clock. How does it work? Thompson has no idea, though he has traced the input signal through a complex arrangement of feedback loops within the evolved circuit. In fact, out of the 37 logic gates the final product uses, five of them are not even connected to the rest of the circuit in any way—yet if their power supply is removed, the circuit stops working. It seems that evolution has exploited some subtle electromagnetic effect of these cells to come up with its solution, yet the exact workings of the complex and intricate evolved structure remain a mystery (Davidson 1997).
Another example:

When the GA was applied to this problem, the evolved results for three, four and five-satellite constellations were unusual, highly asymmetric orbit configurations, with the satellites spaced by alternating large and small gaps rather than equal-sized gaps as conventional techniques would produce. However, this solution significantly reduced both average and maximum revisit times, in some cases by up to 90 minutes. In a news article about the results, Dr. William Crossley noted that “engineers with years of aerospace experience were surprised by the higher performance offered by the unconventional design”.
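Neither article spells out the mechanism, so for concreteness, here is a minimal sketch of the generate-select-vary loop behind both results in Python. The OneMax objective, the parameter values, and every name here are toy choices of mine, not the actual circuit or orbit encodings:

```python
import random

random.seed(0)  # deterministic toy run

GENOME_LEN = 32      # bits per candidate solution
POP_SIZE = 40
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(genome):
    # Toy objective ("OneMax"): count the 1-bits. A real application
    # would call a simulator here (circuit behaviour, orbit revisit time).
    return sum(genome)

def select(pop):
    # Tournament selection: the fitter of two random candidates survives.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]
best = max(pop, key=fitness)

for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop)))
           for _ in range(POP_SIZE)]
    best = max(pop + [best], key=fitness)

print(fitness(best))  # usually at or near the optimum of GENOME_LEN
```

The point is that nothing in the loop encodes how a good solution should look; the circuit and constellation results come from swapping in a richer genome and fitness function, which is exactly why the outputs can surprise their own operators.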
So what could possibly happen if you add some machine intelligence and a bunch of irrational and reckless humans?
A narrow AI employed to solve problems in molecular nanotechnology could be an existential risk nonetheless.
That strikes me as mostly the risks inherent in molecular nanotech; the AI isn’t the problematic part. For example, is anything going to go wrong because a GA is optimizing satellite paths?
“Davidson 1997”

This seems to be a reference to a New Scientist article. Is there anything more credible to back up the claim that a GA managed to design a device whose workings human engineers have not managed to understand?