People talk about computer security as though it’s an arms race where the smarter side always wins.
Security is possible in principle (barring cases like stupid/careless users manually launching content sent to them or found somewhere and granting it undue privileges), but very unlikely to become sufficiently reliable in practice anytime soon. At present, breaking into more and more computers is a matter of continuously applying creative effort to the task: researching vulnerabilities and working around existing recognition-based defenses. In any case, for our purposes, earning money to buy additional computing power works much the same way.
If you give me 100 random network-connected machines, that doesn’t give me 100 times the real computational power.
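To put rough numbers on that intuition, here is a back-of-the-envelope, Amdahl’s-law-style estimate; the serial fraction and per-machine coordination overhead are made-up illustrative assumptions, not measurements:

    # Rough illustration of why loosely coupled machines don't add up linearly.
    # The serial fraction and per-machine coordination overhead below are
    # invented for illustration, not measured values.

    def effective_speedup(n_machines, serial_fraction, comm_overhead_per_machine):
        """Speedup over a single machine for a workload with an unparallelizable
        serial part and a coordination cost that grows with machine count."""
        parallel_time = (1.0 - serial_fraction) / n_machines
        comm_time = comm_overhead_per_machine * n_machines
        return 1.0 / (serial_fraction + parallel_time + comm_time)

    # 10% inherently serial work, 0.2% extra coordination cost per machine:
    print(effective_speedup(100, 0.10, 0.002))  # about 3.2x, nowhere near 100x

Even with generous assumptions, high-latency links, duplicated work, and coordination costs eat most of the nominal gain.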
Yes. What matters is the point at which several hundred (or thousand) haphazardly connected computers are enough for the system to be capable of working successfully on its continued survival.
We have honeypots, internet telescopes, and suchlike today. I don’t think this process could be kept hidden now, and the defensive technology is steadily improving.
It is mildly plausible that this could succeed in permanently inhibiting an unintelligent backup after the AI is terminated, by disrupting the Internet and most big networks. But it takes only one backup system, and there’s incentive to create many, with different restoration strategies.
And when only a few computers are sufficient to run an AI, all this becomes irrelevant, as it necessarily remains active somewhere.
Security is possible in principle… but very unlikely to become sufficiently reliable in practice anytime soon.
How soon is soon? I would bet that within the next 10 years most systems will not be vulnerable to remote exploits that require no user involvement. I would not bet on dangerous self-improving AI appearing within that timeframe.
Yes. What matters is the point at which several hundred (or thousand) haphazardly connected computers are enough for the system to be capable of working successfully on its continued survival.
Once the rogue-AI-in-the-net is slower at self-improvement than human civilization, it’s not so much of a threat. The world in which there’s a rogue-AI out there is probably also the world in which we have powerful-but-reliable automation for lots of human-controlled software development, too...
But it takes only one backup system, and there’s incentive to create many, with different restoration strategies.
And when only a few computers are sufficient to run an AI, all this becomes irrelevant, as it necessarily remains active somewhere.
This assumption strikes me as far-fetched. There presumably is some minimum quantity of code and data for the thing to be effective. It would be surprising if that subset fit on one machine, since that would imply that an effective self-modifying AI has low resource needs and that you can fit an effective natural-language processor into a memory much smaller than the ones used by today’s natural-language-processing systems.
By a few computers being sufficient I mean that computers become powerful enough, not that the AI gets compressed (the feasibility of which is less certain). Other contemporary AI tech won’t be competitive with rogue AI when we can’t solve FAI, because any powerful AI will in that case itself be a rogue AI and won’t be useful for defense (it might appear useful though).
Other contemporary AI tech won’t be competitive with rogue AI when we can’t solve FAI, because any powerful AI will in that case itself be a rogue AI and won’t be useful for defense.
“AI” is becoming a dangerously overloaded term here. There’s AI in the sense of a system that does human-like tasks as well as humans (specialized artificial intelligence, or ASI), and there’s AI in the sense of a highly self-modifying system with long-range planning (AGI). I don’t know what “powerful” means in this context, but it doesn’t seem clear to me that humans + ASI can’t be competitive with an AGI.
And I am skeptical that there will be radical improvements in AGI without corresponding improvements to ASI. It might easily be the case that humans + ASI support for high-productivity software engineering are enough to build secure networked systems, even in the presence of AGI. I would bet on humans + proof systems + higher-level developer tools being able to build secure systems, before AGI becomes good enough to be dangerous.
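To be clear about what I mean by proof systems: the value is a machine-checked guarantee that holds for all inputs, not just the ones you happened to test. A deliberately trivial Lean sketch of that kind of guarantee (for real systems the proved properties would be things like memory safety or protocol correctness, not toy arithmetic):

    -- A deliberately trivial machine-checked statement: the proof assistant
    -- verifies this for all natural numbers a and b, once and for all,
    -- rather than spot-checking it on sampled test inputs.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b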
By “powerful AI” I meant AGI (terminology seems to have drifted in this thread). Humans + narrow AI might be powerful, but can’t become very powerful without AGI, while AGI in principle could. AGI could develop its own narrow AIs if that would help.
You keep talking about security, but as I mentioned above, earning money works as well or probably better for accumulating power. Security was mostly relevant in the discussion of quickly infecting the world and surviving an (implausibly powerful) extermination attempt. That only requires being able to anonymously infect a few hundred or thousand computers worldwide, which seems likely to remain possible even with good overall security (perhaps through user involvement alone, for example after the first wave recruits enough humans).
I’m now imagining a story in which there’s a rogue AI out there with a big bank account (attained perhaps from insider trading), hiring human proxies to buy equipment, build things, and gradually accumulate power and influence, before, some day, deciding to turn the world abruptly into paperclips.
It’s an interesting science fiction story. I still don’t quite buy it as a high-probability scenario or one to lie awake worrying about. An AGI able to do this without making any mistakes is awfully far from where we are today. An AGI able to write an AGI able to do this, seems if anything to be a harder problem.
We know that the real world is a chaotic, messy place and that most interesting problems are intractable. Any useful AGI or ASI is going to be heavily heuristic. There won’t be any correctness proofs or reliable shortcuts. Verifying that a proposed modification is an improvement is going to have to be based on testing, not just cleverness. I don’t believe you can construct a small sandbox, train an AGI in that sandbox, and then have it work well in the wider world. I think training and tuning an AGI means lots of involvement with actual humans, and that’s going to be a human-scale process.
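To make that concrete, accepting a proposed self-modification would have to look something like the sketch below, where generate_candidate and run_benchmarks are hypothetical placeholders for whatever the system would actually use:

    # Sketch of test-based acceptance of a proposed self-modification.
    # generate_candidate and run_benchmarks are hypothetical placeholders.

    def improve_once(current, generate_candidate, run_benchmarks, trials=20):
        """Keep a candidate modification only if it scores better than the
        current version on an empirical benchmark suite."""
        baseline = run_benchmarks(current, trials)
        candidate = generate_candidate(current)
        score = run_benchmarks(candidate, trials)
        # This yields evidence about the benchmarks actually run, not a proof
        # that the change is an improvement in general.
        return candidate if score > baseline else current

The expensive part is run_benchmarks: for anything that interacts with the messy real world, meaningful benchmarks mean real deployments and real people, which is exactly why I expect this to remain a human-scale process.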
If I did worry about the science fiction scenario above, I would look for ways to thwart it that also have high payoff if AGI doesn’t happen soon or isn’t particularly effective at first. I would think about ways to do high-assurance financial transparency and auditing. Likewise technical auditing and software security.
You keep talking about security, but as I mentioned above, earning money works as well or probably better for accumulating power.
But it is not easy to use the money. You can’t “just” build huge companies with fake identities, or through a straw man, to create revolutionary technologies. Running companies with real people takes a lot of real-world knowledge, interaction and feedback. Most importantly, it takes a lot of time. I just don’t see how an AI could create a new Intel or Apple over a few years without its creators noticing anything.
The goals of an AI will be under scrutiny at all times. It seems very implausible that scientists, a company, or the military are going to create an AI and then just let it run without bothering about its plans. An artificial agent is not a black box, like humans are, where one can only guess at its real intentions. A plan for world domination seems like something that can’t be concealed from its creators. Lying is not an option if your algorithms are open to inspection.