Other contemporary AI tech won’t be competitive with rogue AI if we can’t solve FAI, because in that case any powerful AI will itself be a rogue AI and won’t be useful for defense.
“AI” is becoming a dangerously overloaded term here. There’s AI in the sense of a system that does human-like tasks as well as humans (specialized artificial intelligence, or ASI), and there’s AI in the sense of a highly self-modifying system with long-range planning: AGI. I don’t know what “powerful” means in this context, but it doesn’t seem clear to me that humans + ASI can’t be competitive with an AGI.
And I am skeptical that there will be radical improvements in AGI without corresponding improvements to ASI. It might easily be the case that humans + ASI support for high-productivity software engineering are enough to build secure networked systems, even in the presence of AGI. I would bet on humans + proof systems + higher-level developer tools being able to build secure systems before AGI becomes good enough to be dangerous.
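To make that bet concrete, here is a toy sketch of the kind of machine-checked invariant I have in mind. Everything in it (the parse_message function, MAX_LEN, the fuzz harness) is illustrative, a stand-in for what a real proof system would verify once and for all, not an actual verification tool:

```python
# Toy stand-in for "proof systems + developer tools": state the safety
# invariant explicitly and machine-check it, instead of trusting an ad hoc
# length check. parse_message and MAX_LEN are made up for illustration.
import random
import struct

MAX_LEN = 1024  # invariant: no parsed payload may exceed this many bytes

def parse_message(data: bytes) -> bytes | None:
    """Parse a length-prefixed message; reject anything malformed."""
    if len(data) < 4:
        return None
    (length,) = struct.unpack(">I", data[:4])
    if length > MAX_LEN or len(data) - 4 != length:
        return None  # refuse over-long or truncated input outright
    return data[4:]

def fuzz_invariant(trials: int = 10_000) -> None:
    """Randomized check of the property a proof assistant would discharge
    once and for all: parse_message never raises and never returns an
    over-long payload."""
    rng = random.Random(0)
    for _ in range(trials):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        result = parse_message(blob)
        assert result is None or len(result) <= MAX_LEN

if __name__ == "__main__":
    fuzz_invariant()
    print("invariant held on all sampled inputs")
```

The point is only that the assurance comes from tooling and explicit invariants, which humans plus narrow AI can keep improving without anything resembling AGI.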
By “powerful AI” I meant AGI (terminology seems to have drifted in this thread). Humans + narrow AI might be powerful, but can’t become very powerful without AGI, while AGI in principle could. AGI could work on its own narrow AIs if that helps.
You keep talking about security, but as I mentioned above, earning money works as well or probably better for accumulating power. Security was mostly relevant in the discussion of quickly infecting the world and surviving an (implausibly powerful) extermination attempt. That only requires being able to anonymously infect a few hundred or a few thousand computers worldwide, which seems likely to remain possible even with good overall security (perhaps through user involvement alone, for example after a first wave that recruits enough humans).
Hmm. I’m now imagining a story in which there’s a rogue AI out there with a big bank account (acquired perhaps through insider trading), hiring human proxies to buy equipment, build things, and gradually accumulate power and influence, before, some day, abruptly deciding to turn the world into paperclips.
It’s an interesting science fiction story. I still don’t quite buy it as a high-probability scenario or one to lie awake worrying about. An AGI able to do this without making any mistakes is awfully far from where we are today. An AGI able to write an AGI able to do this seems, if anything, a harder problem.
We know that the real world is a chaotic, messy place and that most interesting problems are intractable. Any useful AGI or ASI is going to be heavily heuristic. There won’t be any correctness proofs or reliable shortcuts. Verifying that a proposed modification is an improvement is going to have to be based on testing, not just cleverness. I don’t believe you can construct a small sandbox, train an AGI in that sandbox, and then have it work well in the wider world. I think training and tuning an AGI means lots of involvement with actual humans, and that’s going to be a human-scale process.
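Sketched as code, the acceptance rule I’m imagining looks something like this (the score functions, tasks, and margin are all placeholders, not a real evaluation protocol):

```python
# Minimal sketch of "testing, not just cleverness": a proposed modification
# is kept only if it empirically beats the incumbent on a held-out battery
# of tasks. Everything here is a placeholder for illustration.
from typing import Callable, Sequence

def mean_score(system: Callable[[str], float], tasks: Sequence[str]) -> float:
    """Average empirical performance over a fixed task battery."""
    return sum(system(t) for t in tasks) / len(tasks)

def accept_modification(
    incumbent: Callable[[str], float],
    candidate: Callable[[str], float],
    tasks: Sequence[str],
    margin: float = 0.01,
) -> bool:
    """Keep the change only if measured improvement clears a margin.
    No proof is consulted; the judgment is purely empirical, which is
    why every round of self-improvement costs real evaluation time."""
    return mean_score(candidate, tasks) > mean_score(incumbent, tasks) + margin

if __name__ == "__main__":
    tasks = ["task_a", "task_b", "task_c"]
    old = lambda t: 0.50  # stand-in incumbent
    new = lambda t: 0.55  # stand-in candidate
    print(accept_modification(old, new, tasks))  # True: clears the margin
```

Each pass through that loop has to touch the messy world, which is the bottleneck I expect to keep self-improvement human-paced.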
If I did worry about the science fiction scenario above, I would look for ways to thwart it that also have a high payoff if AGI doesn’t happen soon or isn’t particularly effective at first. I would think about ways to do high-assurance financial transparency and auditing, and likewise technical auditing and software security.
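For instance, one low-tech shape financial transparency could take is a hash-chained, append-only ledger, so that any retroactive edit invalidates everything recorded after it. This is a textbook tamper-evident log, sketched here with made-up field names, not a design proposal:

```python
# Tamper-evident audit log: each entry commits to the hash of the previous
# one, so rewriting history means forging every subsequent hash as well.
import hashlib
import json

def append_entry(log: list[dict], record: dict) -> None:
    """Append a record, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)  # deterministic serialization
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; tampering with any earlier record breaks
    all of the hashes that follow it."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Usage: auditors republish only the latest hash; a rogue actor quietly
# laundering money through proxies would have to forge the whole chain.
log: list[dict] = []
append_entry(log, {"from": "acct_a", "to": "acct_b", "amount": 100})
append_entry(log, {"from": "acct_b", "to": "acct_c", "amount": 40})
assert verify_chain(log)
```

The attraction is that this kind of infrastructure pays for itself against ordinary human fraud even if AGI never shows up.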
You keep talking about security, but as I mentioned above, earning money works as well or probably better for accumulating power.
But it is not easy to use the money. You can’t “just” build huge companies under fake identities, or through straw men, to create revolutionary technologies. Running companies with real people takes a lot of real-world knowledge, interaction, and feedback. But most importantly, it takes a lot of time. I just don’t see how an AI could create a new Intel or Apple over a few years without its creators noticing anything.
The goals of an AI will be under scrutiny at all times. It seems very implausible that scientists, a company, or the military would create an AI and then just let it run without bothering about its plans. An artificial agent is not a black box the way humans are, where one can only guess at its real intentions. A plan for world domination seems like something that can’t be concealed from its creators. Lying is not an option if your algorithms are open to inspection.