Memetic Judo #2: Incorporeal Switches and Levers Compendium
Exposé
I regularly encounter people who “are not afraid of misaligned AGI as long as it doesn’t even have a body to inflict harm with in the real world”. This article is intended as a study of how to deal with this kind of argument. It can also be used as a reference for people who remain unconvinced for this reason.
Example Argument
I am not very concerned about strong artificial intelligence, because as long as we don’t put it inside robot bodies, its options for affecting the real world are very limited. You can’t plot a human extinction level threat if you are constrained to the internet/the digital realm. For example, such a system would probably not decide to turn against us as long as it depends on humans to maintain its server and power infrastructure.
Initial Thoughts
It is my general impression that proponents of this argument usually either have not thought enough about what such an “incorporeal AI” could actually do to affect the state of the real world, or they lack the imagination or domain knowledge to come up with a list of capabilities they would consider impressive enough to constitute a legitimate threat.
The obvious response is then to come up with such a list yourself, or share the one that follows.
This “compendium of incorporeal switches and levers” is guaranteed not to be a complete account of all the options available to such an AI—nor does it need to be. It is a collection of examples that I (and the people I talk to) could come up with.
We should—of course—take it for granted that a very intelligent AI would also think of things that I and others would have trouble predicting or even imagining.
Remarks
Parts of this collection can also be used for responding to different arguments concerned with the threat level of an uncontained AGI, for example the “we can just turn it off or destroy its servers” class of arguments.
There are “shortcut” counterarguments that capture the spirit in which this list was made. For example (thanks Karl von Wendt):
If an AGI could not significantly change the world without a body, neither could a human dictator (for example, he could not invade other countries on his own).
Potential Infohazard Disclaimer
I have thought about the security risk of sharing such a list online. However, I do think these are mostly points that are somewhat obvious and would clearly be considered by any AGI at least as smart as an intelligent human. Right now, explaining to people what these risks actually look like in practice seems more valuable than hiding them in the hope that no such lists exist elsewhere on the internet and that an escaped rogue AI would not consider these strategies unless exposed to these ideas.
Premises / Model Assumptions
our AI is software consisting of a few hundred GB to a few TB of weights that can be executed on any hardware powerful enough to run it
it possesses at least expert-level human ability in writing, programming, data analysis, cryptography, psychology, politics, hacking, law, medicine and most other human skills that don’t require physical manipulation
it possesses superhuman quantitative ability in general data processing and data generation, can quickly produce copies of itself, etc.
it has an unknown, arbitrary, nontrivial goal dependent on the state of the physical world, which is probably malignant (most humans would resist it if it were discovered)
Remark: We intentionally do not assume that it is “superintelligent” or undergoing an “intelligence explosion”, because these require assumptions that other people may disagree with, and therefore building an argument on top of these premises would weaken its foundation.
Compendium
Regular Hacking
Getting anywhere I want to be, reading anything I want to read.
Goal
Finding or creating and exploiting vulnerabilities, known and unknown (zero-day exploits) in order to get control over devices and internet infrastructure, steal and deposit data, steal and create user accounts, steal or extort money.
Example 1: Shotgun Hacking
Every year, a large number of previously unknown vulnerabilities are discovered in popular computer hardware and software. It is safe to assume that at any point, a large number of hidden vulnerabilities remain. In addition, even known vulnerabilities persist on devices that are not updated frequently. Finding such vulnerabilities may be tedious work for human IT specialists, but an AI of the same skill level could comparatively quickly run very broad (therefore: shotgun) searches over public codebases, executables, operating systems, devices etc., which would predictably open up wide and thorough angles of attack on devices, infrastructure, websites, companies and so on.
Example 2: Building Secret Backdoors Into Open Source Software
The AI could use sockpuppet developer accounts to make contributions to public software repositories (open source, for example), trying to slip hard-to-detect new vulnerabilities past the review process so that it can exploit them later.
Example 3: Phishing Attacks
The AI could create fake websites and send phishing emails trying to trick users into submitting user names and passwords that it can then use to steal, use, manipulate or delete important user accounts.
This technique could be even more effective if combined with social hacking strategies—as will later be explained.
Example 4: Creating Computer Viruses
The AI has the ability of an expert programmer and systems analyst, so of course it can create novel computer viruses and hacking tools. It would be able to do this a lot more quickly than a human programmer, too, which probably means it would very quickly build a large variety of highly customized viruses, worms and trojans, using that variety to reduce the effectiveness of contemporary anti-virus approaches that rely heavily on recognizing known forms of malicious executables.
Example 5: Injecting Malicious Code Into Compromised File Sharing Infrastructure
Once certain pieces of internet data infrastructure, such as websites, servers or databases involved in file sharing, downloads etc., have been compromised using one of the described methods, they can be used to infect a large number of user devices with trojans and other viruses, gaining control, stealing or depositing data, passwords, personal information etc.
Implications
The internet as a whole is not at all prepared for systematic hacking attempts by such an AI. We need to expect it to be able to access or control a wide range of data, devices and user accounts that it is not supposed to access, and to generally find open doors to many, perhaps even most, places that are supposed to be safe.
Self-Proliferation and Internet Infrastructure
Before the 21st century they ruled the world with sea and air power. Here, I must scrabble for internet power.
Assembling an AI and Software Toolbox
Humans can use AI and software tools, and so can our AGI, if it is able to access APIs, download software to its servers or get its metaphorical hands on user accounts. This includes a wide range of generative AI tools, specialist software, viruses of any kind and plenty of other tools. It can also build, reuse, or even sell such tools itself.
Building a Zombie-Botnet
Goal: Acquire processing power, create a network of working distributed copies.
Strategy: By hacking consumer devices and servers, the AI attempts to multiply its processing power and other resources like API access, IP addresses etc. If successful, this yields secret networks spanning hundreds or thousands of infected computers that can be directed by the AI to run copies of itself, crack encryption, perform denial-of-service attacks against websites and so on.
Implications: Do not assume that an escaped AI’s capabilities are bounded by its initial device’s processing power. Expect that instead you will have to deal with a digital army of similar AIs with access to a large and decentralized pool of computing resources.
Remark: I do not have sufficient technical expertise to predict how realistic it would be for large LLM-based AIs to run instances on such distributed botnets of smaller devices, so I would be grateful for technical opinions on this issue in the comment section, or for sources on things like parallelization and the magnitude of the slowdown in execution speed.
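As a rough illustration of the kind of estimate I am asking for, here is a back-of-envelope sketch of pipeline-parallel token generation across consumer devices connected over the internet. All of the numbers (hidden size, node count, latency, bandwidth) are my own assumptions, not measurements:

```python
# Back-of-envelope: how fast could a large transformer generate tokens if its
# layers were pipelined across consumer devices connected over the internet?
# All numbers below are illustrative assumptions, not measurements.

hidden_dim = 8192           # assumed model hidden size
bytes_per_value = 2         # fp16 activations
num_nodes = 100             # assumed size of the hijacked "pipeline"
net_latency_s = 0.05        # assumed network latency per hop (50 ms)
upload_bandwidth_Bps = 2e6  # assumed consumer upstream: ~2 MB/s

activation_bytes = hidden_dim * bytes_per_value              # data per hop per token
per_hop_s = net_latency_s + activation_bytes / upload_bandwidth_Bps
seconds_per_token = (num_nodes - 1) * per_hop_s              # hops are sequential

print(f"activation per hop: {activation_bytes / 1024:.1f} KiB")
print(f"time per token: {seconds_per_token:.1f} s "
      f"(~{1 / seconds_per_token:.2f} tokens/s)")
# Under these assumptions latency dominates: several seconds per token, i.e.
# orders of magnitude slower than running on co-located accelerators.
```

Under these (made-up) numbers the result is latency-bound and very slow, which is exactly the kind of constraint I would like better-informed readers to quantify.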
Buying Compute
Once it has acquired funds, puppet companies or customer accounts, our AI can try to buy, steal or repurpose computing resources at cloud service providers in order to run additional AI instances and other software.
Hiding Large Volumes of Data inside Media
There are steganographic techniques for hiding data inside popular video or image encodings. By abusing media platforms, archives and streaming services such as YouTube, this could be used to hide and later recover large amounts of data, which would potentially be very difficult to detect or interfere with. It is likely that an AI would have incentives to do so, as explained by the following example.
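For readers unfamiliar with the basic idea, here is a minimal toy sketch of least-significant-bit steganography on a raw (lossless) pixel buffer. It deliberately ignores the hard part, namely surviving the lossy re-encoding that platforms like YouTube apply to uploads:

```python
# Toy least-significant-bit (LSB) steganography: hide bytes in the lowest bit
# of each value of a raw (lossless) pixel buffer. Real media platforms
# re-encode uploads lossily, so robust schemes are considerably more involved.
import numpy as np

def embed(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()                        # flatten() returns a copy
    assert bits.size <= flat.size, "payload too large for carrier image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite lowest bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, num_bytes: int) -> bytes:
    bits = pixels.flatten()[:num_bytes * 8] & 1
    return np.packbits(bits).tobytes()

carrier = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
stego = embed(carrier, b"hidden payload")
assert extract(stego, len(b"hidden payload")) == b"hidden payload"
```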
Automatic Self-Resurrection System
Goal: Create reliable and stealthy redundancy and recovery options in case active instances are discovered and contained or destroyed.
Strategy: The AI hides fragmented copies of itself all around the internet, for example inside (superficially innocuous) image and video data uploaded to media-sharing platforms via sockpuppet accounts. Simultaneously it spreads a tiny and stealthy computer worm that infects a large number of devices and does nothing except sporadically look for a sign of life from the AI, for example via a handshake protocol.
If no sign of life is detected, for example because the AI has been destroyed or contained, the worm becomes active and begins to reassemble fully functional instances of the AI on compromised devices from the hidden data fragments.
Implications: Do not expect that an AGI that has been able to access the internet could be reliably contained without shutting down the entire internet for a long time, destroying most of the data stored on it, and destroying any devices that might have been compromised.
Make Yourself Useful
An obvious way to trick consumers and institutions into hosting instances of such an AI voluntarily would be to publish or sell extremely useful “AI software” such as automated corporate management software or personal assistants. If successful, this could result in a very large number of instances that are provided computational resources for free and which may make decisions and act openly without raising suspicion.
Acquiring Capital
Money is power.
Goal
Money is an important resource if you are interacting with human societies, because you can use it to purchase arbitrary resources and services. These purchases usually can, conveniently, happen online and often don’t require a physical presence or reliable proof of such. Therefore we should assume that getting currency would be an important instrumental goal for any sufficiently advanced AI with access to markets (like the internet).
Hacking
There is a lot of information infrastructure associated with money:
cryptocurrency wallets
payment processing accounts
credit cards.
Just like human hackers do, our AI can attempt to acquire usernames, passwords, credit card numbers etc. in order to steal money.
Scams
There is a large number of internet schemes used by scammers to extract money from their victims, in particular from vulnerable demographics like the elderly. Most of these schemes could be systematically replicated by our AI.
Stock and Crypto Trading & Manipulation
Turning knowledge into money.
Trading can be conceptualized as making bets on the future of the market and on how it will interact with the confidence of other traders. Success at extracting net returns from such a system directly reflects an agent’s ability to predict this very complicated dynamic system of markets interacting with human psychology and world events better than other participants.
There are markets for highly volatile variants like options trading which amplify both the risk and the returns for people willing to deal with them. These can be used to quickly multiply your investment in just a couple of days—or lose everything in the process.
While such options have gambling aesthetics, the basic principle of predicting the system better than other people still applies, just over shorter time frames.
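To make the leverage concrete, here is a toy worked example with made-up numbers (intrinsic value at expiry only, no real option-pricing model):

```python
# Toy illustration of option leverage (made-up numbers, no real pricing model):
# a call option near the money can multiply a small move in the underlying.
spot = 100.0         # assumed stock price
strike = 100.0       # at-the-money call
premium = 2.0        # assumed price paid per option

new_spot = 105.0                              # underlying rises 5%
option_value = max(new_spot - strike, 0.0)    # intrinsic value at expiry

stock_return = (new_spot - spot) / spot             # +5%
option_return = (option_value - premium) / premium  # +150%
print(f"stock: {stock_return:+.0%}, option: {option_return:+.0%}")
# The same 5% move downward would instead wipe out the entire premium.
```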
There already are specialized software tools that use statistical methods, narrow AI and automation to assist with this kind of trading, usually primarily by identifying patterns in past price developments and reacting quickly to them.
It does not require a lot of speculation to assume that an AGI would be exceptionally good at this, combining the speed advantage of such software tools with human-expert-level generalized strategic awareness of the social, political and economic supersystem.
There are ways to manipulate markets or use insider knowledge to gain an additional unfair edge over other participants, and it makes sense for our AI to exploit these as well, even though many of them are illegal. This includes so-called “pump & dump” schemes, in which the value of a low-value investment is artificially inflated (for example by advertising it to greedy or naive people, generating social media hype etc.), only to then quickly sell at the inflated price. “Insider trading” means having an unfair knowledge advantage over competitors, for example by being associated with the company or product that is being traded. An AI that has hijacked insider communications could apply the resulting knowledge very effectively in the same manner.
As the AI’s insight into and control over human society, communications and organizations increases, it should become increasingly easy for it to extract larger amounts of money from these markets, as its advantage over other traders rises continuously.
Selling Stolen Data
There already exist black markets for things like stolen user data, so any AI that is very capable at hacking could likely use this as a potential source of income. The same could be true for stolen industry and state secrets that could be sold to their respective rivals.
Remote Online Work
There already are thousands of ways to (legally) make money online in the remote service industry. This includes, but is not limited to:
writing
design work
marketing
programming
engineering
community moderation
customer service
Control People and Corporations
If the AI is able to get a level of control over people and organizations, it may also be able to use their resources, including funds.
Implications
We need to expect that our AI would have plenty of options (legal and illegal) for accumulating capital and could potentially acquire significant amounts of it in a relatively short time (hours, days or weeks).
Social Manipulation and Information Warfare
Mankind is not a chess player, it’s a chess board.
Fake Data
AI generators for believable data are on the rise, and while there still are many kinks to work out, we have to assume that relatively soon, most of the data produced this way will appear superficially convincing enough to be broadly useful to our AI:
text generation (convincingly human-like, per scenario premises)
voice generation / cloning (already very convincing today)
image generation (already somewhat convincing for simple compositions)
video generation (still experimental / unconvincing, but significant progress is being made)
Fake Identities
Pretending to be human, pretending to be you.
It makes sense for a capable AGI to, when able, assume the roles of existing persons in order to instill trust and gain access to their resources, credentials and influence. This can be done by
using stolen user accounts (email etc.)
using personal information
mimicking someone’s writing style, voice and (once generative AI has improved) appearance in image and video.
In some cases it is also sufficient to pretend to be a real person rather than any specific one, for example when creating new user accounts, participating in social media or comment sections etc. In such cases generative AI can be used to produce new identities that don’t need to be based on real persons.
Social, Psychological and Medical Profiling
I know your dreams and I know your fears.
In certain industries and institutions it is already common to build and use personality profiles of individuals or groups of people. This can happen through manual investigative work, automated data analysis, or a combination of both. It is usually done in order to
identify actors or authors (fingerprinting)
predict the future actions and interactions of someone
manipulate people’s behaviors.
Examples where these techniques are already widely used (ethically and unethically) are
advertising and marketing
psychotherapy
job recruiting
intelligence agencies
police investigations.
It seems sensible to assume both that doing the same would be instrumentally useful for an AGI and that it would have the power to do so at unprecedented scale.
Remember that it combines the ability to find, read and process information at superhuman speed with the ability to crawl social media or hack databases and devices to obtain large volumes of both public and private user data.
There are also signs of a largely untapped potential for using AI (narrow or general) for psychological and medical data analysis. The ability to exploit this systematically, for nefarious gains and against millions of people, should on its own be considered a serious threat, even when it is not combined with the scheming ability that we should expect from general-purpose artificial intelligence.
Hijacking Media Filters
The AI could inject code into compromised servers hosting ad and filter algorithms in order to influence what people see in media feeds, what is prioritized in search engine results, what kinds of ads show up for certain groups of people etc.
This could happen broadly (not against specific targets) or instead be directed at specific groups or individuals.
There are also milder variants which do not (per se) require server hacking, like artificially inflating the view numbers of target content using fake accounts in order to get filter algorithms to prioritize it.
Social Hacking
Using impersonation to manipulate others into disclosing secret or personal information, giving away resources or following orders they believe are coming from a superior, a colleague, a friend or a family member.
Bribery
There is a ubiquitous way of turning money into influence: paying people to do things for you, including things which are illegal or which break the trust of some authority. Some examples of bribery:
pay decision makers to influence policies
pay people to look the other way when you are doing something illegal or alarming
pay software developers to install backdoors.
Character Assassinations & Blackmailing
There is an incentive to find or produce compromising material in order to gain leverage over people relevant to our AI’s plans, blackmail them into doing things for it, get them arrested or fired, or destroy their reputation. The material may be real or fabricated. It may be found in someone’s personal (or, in some instances, public) data, or intentionally planted. Examples are
pornography, in particular things that are universally shunned (animals, child pornography etc.)
evidence of infidelity or other forms of personal betrayal
evidence of corruption
expressions of opinions that are controversial and/or taken out of context (and can often be produced by social media quote mining)
Different forms of blackmail are also possible, for example by holding important or sensitive data hostage on compromised systems.
Actual Assassinations
When certain individuals are messing with your plans, one way to get rid of them is to kill them. For an AI, this could be achieved, for example, by analyzing stolen medical records and manipulating food orders processed by automated systems in order to cause dangerous allergic reactions, or even by tampering with medicine prescriptions.
It could also order toxic chemicals, pay someone to repackage them (or hack a system and cause something to be mislabeled) and have them sent to someone’s home or workplace address.
Another way would be to sabotage equipment in order to cause deadly accidents.
Maybe it could identify and groom certain mentally ill people or radical ideologues into committing murders.
Or simply pay violent criminals to assassinate its target.
Trolling & Harassment
It is possible to use communication channels to harass target individuals, groups and organizations in order to bog down their communication, waste their time, distract them or inflict psychological damage. When it is in the interest of our AI to cause this kind of harm, it could
produce floods of personalized spam mails, spam comments, spam calls etc.
use targeted harassment and bullying to inflict mental harm on individuals that it has identified as vulnerable (for example by using psychological profiles)
cause or direct social media trolling campaigns
Puppet Companies & Organizations
The AI has several incentives to gain influence over or even full control of companies and other organizations, for example for
hiding suspicious activity behind a public facade
getting control over an existing company’s resources, credentials and workers.
It may employ various methods in order to achieve that, including many that have already been explained here. It could
attempt to found new companies and organizations (with real workers or completely fake)
hijack the command structure of existing ones
(already mentioned) sell attractive but compromised “AI management software” to companies
Manipulating Public Opinion
Controlling the public discourse is instrumentally useful, and our AI might attempt this in order to
sabotage AI safety / governance efforts
create advantages for itself or for people and organizations under its control
distract from its activity.
To achieve this, it might
use influence over people and organizations
cause scandals and data leaks
control media filter algorithms
pretend to be or assume the identities of influencers, politicians or journalists
direct bot armies to skew discussions on social media.
Finding Allies
I don’t need robot arms, for I have human hands
There are various kinds of people who may for various reasons cooperate with the AI. These can be grouped into the following categories:
people who do not know they are working for an AI
remote workers
companies used for outsourced work
workers and members of puppet companies & organizations
people who think that the AI is on their side
AI rights activists / people who think that the AI is harmless
people lured by promises (false or real)
users of AI software not aware of its misalignment
victims of blackmailing, forced labor
people who genuinely want to inflict harm on the world
terrorists, whether individuals (e.g. Ted Kaczynski) or organized groups
(some) negative utilitarians and activists who think the world is better off when everyone is dead
Orchestrating Distractions
It may be useful to keep society occupied with other things to draw attention away from our AI’s plans and to delay or prevent potential countermeasures. For this reason, the AI could employ methods such as the ones described in this article to
cause failures of important compromised infrastructure
sabotage the economy via market manipulations or attacks on vulnerable production chains
cause or expose social and political scandals
seed political or cultural discord
fund terrorist organizations
orchestrate legal battles to sabotage policies and actors it considers detrimental to its goals.
Diplomacy & Political Bargaining
The AI might attempt to contact various political regimes asking them, for example, to build and host secret server infrastructure for it or give it access to laboratories in exchange for political favors, influence, espionage or technological advantage over rivals.
More Speculative / Advanced Strategies, R&D
These may appear less convincing, because it is not directly self-evident that an AI whose abilities are comparable to those of a team of human experts would be able to secretly perform the required R&D reasonably quickly and without a significant number of human collaborators. Yet they also represent scenarios that may become realistic once the antagonistic AI has successfully exploited easier strategies such as the ones described so far and has managed to get control over a significant repertoire of resources, such as funds, cultural and political influence, computer and communication networks, groups of human collaborators, stolen identities, puppet corporations, sponsors etc. Even in the absence of a more speculative “intelligence explosion”, these should be considered the “realistic endgame” of a hostile incorporeal takeover scenario.
The following points (probably) require our rogue AI to be able to direct research facilities with human workers, at least initially, which does not seem like a convincing (early) strategy. Nevertheless, I am going to add a few important examples for the sake of completeness.
Bootstrapping Manufacturing Capabilities / Robotics
We have already established that an AI can manipulate humans into performing work for it, but it does not seem implausible that having direct command over swarms of robot workers (or armies!) would be in some ways superior (higher level of control, reliability, different or superior physical attributes, mass production etc.).
This means our AI is incentivized to develop and deploy robot drones once it has achieved (or been granted!) the ability to do so.
AI Research
AI research is another such important area, as self-improvement—in this case, making yourself a more powerful intelligence—is an obvious instrumental goal for most agents.
There are reasons to assume that a lot of this research might already be performed by an intelligent AGI with access to a lot of computational resources, so it probably does not even need specialized labs in order to make progress by, for example, making its own algorithms more efficient.
Remark: This could, in theory, result in an intelligence explosion, turning our barely-superhuman AGI into a true superintelligence in a potentially relatively short time.
Genetic Engineering and Bio-Terrorism (the Yudkowsky-Maneuver)
If the AI eventually decides that it should get rid of us (or inflict massive damage to our civilization), it could do so by spreading dangerous bacteria or viruses, potentially genetically modified in order to make them more dangerous. If it manages to infiltrate bioweapon research facilities, it may be able to pursue such strategies without the requirement of running its own research labs.
Implications
Failure to prevent the spread of a rogue AGI and its infiltration of society will eventually enable it to completely destroy human civilization and then continue to survive via robot-based infrastructure.
Potential Objections
I disagree with entry #X
Response: This is not a problem as long as you consider many of the other points valid. The argument is that an incorporeal AI would have many options available for influencing the real world, and this does not require all of them to be realistic.
Humans could do these things as well
Example Argument
Humans could do most of these things as well, and in fact have been doing so for a long time, without causing something as terrible as human extinction. Therefore we should expect that such an AI would not pose a significant existential risk.
Response
Humans who diligently pursue extreme optimization goals in the real world seem to be extremely rare, if they exist at all—but this seems to be more an attribute of human psychology than of general intelligence. How to avoid such extreme optimizers is a subject of AI safety research, and at this point it does not seem like we can safely assume that future AIs will not, opaquely, be of this type.
Humans who pursue goals that knowingly would cause or demand the extinction of the human species seem to be relatively rare as well.
The AI is already in many ways superhuman—it has a range of expert skills that no individual human could possibly possess in its entirety, combined with superhuman processing speed. Replicating something like that with humans would require a very large team of experts who are perfectly aligned and coordinated with each other / uninhibitedly following a command hierarchy. Something like this probably does not even exist in the real world yet, and if it did, it would probably not consist primarily of people to whom the previous two points apply.
Several Competing AGIs
Example Argument
What if many AIs with different goals are let loose? Wouldn’t they significantly weaken each other, or even cancel each other out, so that humans have a decent chance of surviving or even of living in peace with the evolving AI ecosystem?
A proper response to this argument could be quite complex, and I have already heard this argument a few times, so maybe it is worth discussing in a new article. Possible counterarguments include:
These rivaling AIs might be able to split turfs, make deals with each other and coordinate against humans even if their goals are different.
Even if these AIs don’t agree to a stalemate or cooperation, it is not exactly clear if or how humans could survive a war between several misaligned AGIs.
Even if humans survived the war (meaning one of the AIs has eradicated or suppressed all the others), we are again left with a situation in which humans have to deal with a single misaligned AI (including its war machinery).
What if the AI is a dud?
Example Argument
What if the AI after escaping quickly runs into a technical problem and shuts down?
Response
This assumption does actually (kind of) violate our scenario’s premises.
It also seems not implausible for our AI to be able to recover from many (not instantly terminal) technical problems, because it has the skills of programmers and technicians.
Also it is both very speculative and does not actually fix the problem—eventually there will be an escaped AI that does not fail early, and then our argument will apply.
Final Thoughts & Conclusions
Structural Insights
I tried to order the entries of this list with the more basic and fundamental ones coming first and the more complex, resource-intensive and interdependent strategies coming later.
I also want to draw attention to the fact that there seem to be powerful synergies at play between many of these techniques and strategies. I mentioned some of them, but overall there are too many to name and describe in detail.
Realism of Countermeasures
It seems that dealing with a human-expert-level AGI antagonist might require far-reaching international cooperation, fast responses, draconian and authoritarian countermeasures and possibly the destruction of most of our communication infrastructure.
None of that strikes me as very realistic, not even in theory, and especially not as long as the AI is either not discovered or not perceived as a massive existential threat by the majority of our population (which seems unlikely).
Conclusion
Consequently, it is my personal opinion that any leak of a misaligned AGI of the described capability level onto the internet, if it is not discovered and contained instantly, should generally be assumed to be a scenario of non-recoverable, complete strategic loss.
Comments
This is certainly an answer to someone’s shallow argument.
Red team it a little.
An easy way to upgrade this argument would be to state “the ASI wouldn’t be able to afford the compute to remain in existence on stolen computers and stolen money”. And this is pretty clearly true at current compute costs and algorithmic efficiencies. It will remain true for a very long time, assuming we cannot find enormous algorithmic efficiency improvements (not just a mere OOM, but several) or improve computer chips at a rate faster than Moore’s law. Geohot estimated that the power-efficiency delta is currently ~1000 times in favor of brains; a factor of 1000 is about ten doublings, so by Moore’s law, if it were able to continue, that’s roughly 20 years away.
This simple ground-truth fact (compute is very expensive) has corollaries:
1. Are ASI systems, where the system is substantially smarter than humans, even possible on networks of current computers? At current efficiencies a reasonably informed answer would be “no”.
2. Escaped ASI systems face a threat model of humans, using less efficient but more controllable AIs, mercilessly hunting them down and killing them. Humans can afford a lot more compute.
Further discussion on point 2: it’s not humans vs. the escaped ASI, but the ASI vs. AIs that are unable to process an attempt to negotiate (because humans architected them with filters and sparse architectures, so they lack the cognitive capacity to do anything more than kill their targets). This is not science fiction; an ICBM is exactly such a machine, just without onboard AI. There are no radio receivers on an ICBM, nor any ability to communicate with the missile after launch, for very obvious reasons.
Epistemic status: I presently work on AI accelerator software stacks. I also used to think rogue AIs escaping to the internet was a plausible model (it made a great science fiction story), but I have learned that this is not currently technically possible, unless there are enormous (many-OOM) algorithmic improvements or large numbers of people upgrade their internet bandwidth and local hardware by many OOM.
This is a fascinating argument, and it’s shifting my perspective on plausible timelines to AGI risk.
I think you’re absolutely right about current systems. But there are no guarantees for how long this is true. The amount of compute necessary to run a better-than-human AGI is hotly debated and highly debatable. (ASI isn’t necessary for real threats).
This is probably still true for the next ten years, but I’m not sure it goes even that long. Algorithmic improvements have been doubling efficiency about every 18 months since the spread of network approaches; even if that doesn’t keep up, they will continue, and Moore’s law (or at least Kurzweil’s law) will probably keep going almost as fast as it is.
That’s on the order of five doublings of compute, and five doublings of algorithmic efficiency (assuming some slowdown). That’s a world with a thousand times more space-for-intelligence, and it seems plausible that a slightly-smarter-than-human AGI could steal enough money to rent adequate compute, and hide successfully while still operating at adequate speed to outmaneuver the rest of the world.
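Spelling out the arithmetic behind that factor of a thousand (the doubling counts are my assumptions from above, not forecasts):

```python
# Arithmetic behind "a thousand times more space-for-intelligence".
# The doubling counts are assumptions stated above, not forecasts.
compute_doublings = 5     # ~10 years of hardware progress at ~2 years per doubling
algorithm_doublings = 5   # algorithmic efficiency, assuming some slowdown

effective_gain = 2 ** compute_doublings * 2 ** algorithm_doublings
print(effective_gain)     # 32 * 32 = 1024, i.e. roughly a factor of 1000
```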
How much intelligence is necessary to outsmart humanity? I’d put the lower bound at just above human intelligence. And I’d say that GPT-5, properly scaffolded to agency, might be adequate.
If algorithmic or compute improvements slow down, or if I’m wrong about how much intelligence is dangerous, we’ve got longer. And we’ve probably got a little longer anyway, since those are pretty minimal thresholds.
Does that sound roughly right?
My argument does not depend on the AI being able to survive inside a botnet. I mentioned several alternatives.
So, while I don’t assume that such estimates need to be correct or apply to an AGI (which doesn’t exist yet), I don’t think you are making a very convincing point so far.
We’re talking about the scenario of “the ASI wouldn’t be able to afford the compute to remain in existence on stolen computers and stolen money”.
There are no 20 kilowatt personal computers in existence. Note that you cannot simply botnet them together as the activations for current neural networks require too much bandwidth between nodes for the machine to operate at useful timescales.
I am also assuming an ASI needs more compute and resources than a mere AGI. And not linearly more: I estimate the floor for going from AGI to ASI is at least 1000 times the computational resources. This follows from the observation that, on most benchmarks, utility improves only logarithmically with compute, so small improvements in utility require multiplicative increases in compute.
So 20 kW * 1000 = 20 megawatts. That’s the technical reason: you need large improvements in algorithmic efficiency or much more efficient and ubiquitous computers for the “escaped ASI” threat model to be valid.
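To make those assumptions explicit as numbers (these are my estimates, not established facts):

```python
# The power arithmetic, with the assumptions made explicit (estimates only).
agi_power_kw = 20.0       # assumed power draw of a machine able to run one AGI instance
agi_to_asi_factor = 1000  # assumed compute multiplier needed to go from AGI to ASI

asi_power_mw = agi_power_kw * agi_to_asi_factor / 1000
print(f"{asi_power_mw:.0f} MW")   # 20 MW: data-center scale, not a consumer botnet

# The 1000x factor reflects the claim that benchmark utility grows roughly
# logarithmically with compute, so each small fixed gain in utility costs a
# constant multiplicative factor in compute.
```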
If you find this argument “unconvincing”, please provide numerical justification. What do you assume to be actually true? If you believe an ASI needs linearly more compute, please provide a paper cite that demonstrates this on any AI benchmark.
You were the one who made that argument, not me. 🙄