[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions
From a paper by the Center for Technology and National Security Policy at the National Defense University:
“Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use.
National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The built infrastructure, for example, can be managed more effectively with embedded systems, saving energy and other resources. Increasingly sophisticated weak AI tools can offload much of the routine cognitive or decisionmaking tasks that currently require human operators. Assuming current systems move closer to strong AI capabilities, they could also play a larger and more significant role in problem solving, perhaps even for strategy development or operational planning. In the longer term, fully robotic soldiers may be developed and deployed, particularly by wealthier countries, although the political and social ramifications of such systems will likely be significant. One negative aspect of these trends, however, lies in the risks that are possible due to unforeseen vulnerabilities that may arise from the large scale deployment of smart automated systems, for which there is little practical experience. An emerging risk is the ability of small scale or terrorist groups to design and build functionally capable unmanned systems which could perform a variety of hostile missions.”
So strong AI is on the American military’s radar, and at least some of those involved understand that it could be risky. The paper also contains brief overviews of many other potentially transformational technologies.
I haven’t read the entire piece, but from your excerpt and a quick skim it looks like they are focusing on AI going wrong in prosaic ways, like someone hacking the AI to change what it does, rather than a strong AI causing problems all on its own. I don’t know how much of that is because this is an overview piece covering a variety of technological risks.
Yes, that does seem like the primary focus. However, when they write “It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings,” they cite this article about Stephen Wolfram, which suggests that the researchers are at least aware of the wider risks around creating AGI, even if they choose not to focus on them.
Yikes, that’s early. That’s a lot sooner than I would have said, even as a reasonable lower bound.
You have a credible reason for thinking it will take longer?
I’m no expert, but even Kurzweil—who, from past performance, is usually correct but over-optimistic by maybe five, ten years—doesn’t expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.
2020 is in five years. Going from where we are now to full-on human-level AI in just five years seems like an incredible rate of progress.
Kurzweil’s methodology for selecting those dates is suspect. He made a back-of-the-envelope whole-brain-emulation estimate of the number of FLOPS required to run a human-level AGI. The most powerful supercomputer today, Tianhe-2 in China, already exceeds that level, so by Kurzweil’s own estimates the human race has access to enough computing power to run even an inefficient emulative AGI right now. The years he quotes are when that computing power will be available for $1,000 USD. If you believe in a takeoff scenario, however, it should only matter when the first AGI is created, not how much it costs to buy the equipment to run another one.
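As a rough sanity check on that comparison, here is a minimal back-of-the-envelope sketch. The specific figures are my own assumptions, drawn from commonly cited ballpark numbers rather than from the paper or this thread: Kurzweil’s estimate of roughly 10^16 calculations per second for a functional simulation of the brain, and Tianhe-2’s sustained Linpack performance of roughly 33.86 petaFLOPS.

```python
# Rough comparison of Kurzweil's functional brain-simulation estimate against
# Tianhe-2. Both figures are assumptions (commonly cited ballpark numbers),
# not values taken from the paper or the comment above.

kurzweil_brain_cps = 1e16    # ~10^16 calculations/sec for a functional brain simulation
tianhe2_flops = 33.86e15     # Tianhe-2 sustained Linpack performance, ~33.86 PFLOPS

ratio = tianhe2_flops / kurzweil_brain_cps
print(f"Tianhe-2 vs. brain estimate: {ratio:.1f}x")  # roughly 3.4x

# On these numbers the raw hardware threshold is already crossed, but only at
# supercomputer prices; Kurzweil's dates track when this much computing power
# costs about $1,000, which is a very different milestone.
```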
So we have sufficient computational power today to run an artificial general intelligence. The problem, then, is software. How long will it take to write the software underlying the first AGI? And whatever value you claim, do you have credible reasoning behind that choice?
Personally, I think 5 years is a bit fast. But the quote said the 2020s, and the midpoint of that decade is still 10 years away. I think 10 years is doable if we really, really try. What’s your estimate?
Thanks for posting this link, and for the auxiliary comments. I try to follow these issues, as viewed by this sector of thinkers, fairly closely (the website Defense One often has good articles, and its tech reporter Patrick Tucker touches on some of these issues fairly often), but I had missed this paper until now. Grateful, as I say, that you posted it.
Glad it’s of interest to you. I found it while checking the sources of this Motherboard article. There’s another document linked from there, which you may or may not have seen; it doesn’t mention strong AI, focusing instead on automating war in general.
After reading that, I feel like in the near future it will be much easier to convince the public that concrete AI takeover mechanisms are plausible.