I think the opposite: ARA is just not a very compelling threat model in my mind. The key issue is that AIs that do ARA will need to be operating at the fringes of human society, constantly fighting off the mitigations that humans are using to try to detect them and shut them down. While doing all that, in order to stay relevant, they’ll need to recursively self-improve at the same rate at which leading AI labs are making progress, but with far fewer computational resources. Meanwhile, if they grow large enough to be spending serious amounts of money, they’ll need to somehow fool standard law enforcement and general societal scrutiny.
Superintelligences could do all of this, and ARA of superintelligences would be pretty terrible. But for models in the broad human or slightly-superhuman ballpark, ARA seems overrated, compared with threat models that involve subverting key human institutions. Remember, while the ARA models are trying to survive, there will be millions of other (potentially misaligned) models being deployed deliberately by humans, including on very sensitive tasks (like recursive self-improvement). These seem much more concerning.
Why then are people trying to do ARA evaluations? Well, ARA was originally introduced primarily as a benchmark rather than a threat model. I.e. it’s something that roughly correlates with other threat models, but is easier and more concrete to measure. But, predictably, this distinction has been lost in translation. I’ve discussed this with Paul and he told me he regrets the extent to which people are treating ARA as a threat model in its own right.
Separately, I think the “natural selection favors AIs over humans” argument is a fairly weak one; you can find some comments I’ve made about this by searching my twitter.
Hmm, I agree that ARA is not that compelling on its own (as a threat model). However, it seems to me like ruling out ARA is a relatively natural way to mostly rule out relatively direct danger. And, once you do have ARA ability, you just need some moderately potent self-improvement ability (including training successor models) for the situation to look reasonably scary. Further, it seems somewhat hard to do capabilities evaluations that rule out this self-improvement if models are ARA-capable, given that there are so many possible routes.
Edit: TBC, I think it’s scary mostly because we might want to slow down and rogue AIs might then outpace AI labs.
So, I think I basically agree with where you’re at overall, but I’d go further than “it’s something that roughly correlates with other threat models, but is easier and more concrete to measure” and say “it’s a reasonable threshold to use to (somewhat) bound danger”, which seems worth noting.
While doing all that, in order to stay relevant, they’ll need to recursively self-improve at the same rate at which leading AI labs are making progress, but with far fewer computational resources
I agree this is probably an issue for the rogue AIs. But we might want to retain the ability to slow down if misalignment seems to be a huge risk, and rogue AIs could make this considerably harder. (The existence of serious rogue AIs is surely also correlated with misalignment being a big risk.)
While it’s hard to coordinate to slow down human AI development even if huge risks are clearly demonstrated, there are ways in which it could be particularly hard to prevent mostly autonomous AIs from self-improving. In particular, other AI projects require human employees, which could make them easier to track and shut down. Further, AI projects are generally limited by not having a vast supply of intellectual labor, which would change in a regime where there are rogue AIs with reasonable ML abilities.
This is mostly an argument that we should be very careful with AIs which have a reasonable chance of being capable of substantial self-improvement, but ARA feels quite related to me.
However, it seems to me like ruling out ARA is a relatively naturally way to mostly rule out relatively direct danger.
This is what I meant by “ARA as a benchmark”; maybe I should have described it as a proxy instead. Though while I agree that ARA rules out most danger, I think that’s because it’s just quite a low bar. The sort of tasks involved in buying compute etc are ones most humans could do. Meanwhile more plausible threat models involve expert-level or superhuman hacking. So I expect a significant gap between ARA and those threat models.
once you do have ARA ability, you just need some moderately potent self-improvement ability (including training successor models) for the situation to look reasonably scary
You’d need either really good ARA or really good self-improvement ability for an ARA agent to keep up with labs given the huge compute penalty they’ll face, unless there’s a big slowdown. And if we can coordinate on such a big slowdown, I expect we can also coordinate on massively throttling potential ARA agents.
Yep, I was imagining a big slowdown or uncertainty in how explosive AI R&D will be.
And if we can coordinate on such a big slowdown, I expect we can also coordinate on massively throttling potential ARA agents.
I agree that this cuts the risk, but I don’t think it fully eliminates it. I think it might be considerably harder to fully cut off ARA agents than to do a big slowdown.
One way to put this is that quite powerful AIs in the wild might make slowing down 20% less likely to work (conditional on a serious slow down effort) which seems like a decent cost to me.
Part of my view here is that ARA agents will have unique affordances that no human organization will have had before (like having truly vast, vast amounts of pretty high skill labor).
Of course, AI labs will also have access to vast amounts of high skill labor (from possibly misaligned AIs). I don’t know how the offense-defense balance goes here. I think full defense might require doing really crazy things that organizations are unwilling to do. (E.g. unleashing a huge number of controlled AI agents which may commit crimes in order to take free energy.)
The sort of tasks involved in buying compute etc are ones most humans could do.
My guess is that you need to be a decent but not amazing software engineer to ARA. So, I wouldn’t describe this as “tasks most humans can do”.
Part of my view here is that ARA agents will have unique affordances that no human organization will have had before (like having truly vast, vast amounts of pretty high skill labor).
The more labor they have, the more detectable they are, and the easier they are to shut down. Also, are you picturing them gaining money from crimes, then buying compute legitimately? I think the “crimes” part is hard to stop but the “paying for compute” part is relatively easy to stop.
My guess is that you need to be a decent but not amazing software engineer to ARA.
Yeah, you’re probably right. I still stand by the overall point though.
I think in our current situation shutting down all rogue AI operations might be quite tough (though limiting their scale is certainly possible, and I agree with the critique that absent slowdowns or alignment problems or regulatory obstacles etc. it would be surprising if these rogue agents could compete with legitimate actors with many more GPUs).
Assuming the AI agents have money, there are maybe three remaining constraints they need to deal with:
purchasing/renting the GPUs,
(if not rented through a cloud provider) setting them up and maintaining them, and
evading law enforcement or other groups trying to locate them and shut them down
For acquiring the GPUs, there are currently a ton of untracked GPUs spread out around the world, both gaming GPUs like 4090s and datacenter GPUs like H100s. I can buy or rent them with extremely minimal KYC from tons of different places. If we assume ARA agents are able to do all of the following:
Recruit people to physically buy GPUs from retail stores
Take delivery of online ordered GPUs in anonymized ways using intermediaries
Recruit people to set up fake AI startups and buy GPUs through them
then the effort required to prevent them from acquiring any GPUs in a world anything like today seems crazy. And even if the US + allies started confiscating or buying up 4090s en masse, it’s enough that some other countries adopt laxer standards (perhaps to attract AI talent that is frustrated by the new draconian GPU-control regime).
As for setting up acquired GPUs, I think the AI agents will probably be able to find colocation operators that don’t ask too many questions in many parts of the world, but even if the world were to coordinate on extremely stringent controls, the AI agents could set up datacenters of their own in random office buildings or similar—each inference setup wouldn’t need that much power and I think it would be very hard to track it down.
As for not being shut down by law enforcement, I think this might take some skill in cybersecurity and opsec etc. but if it operates as dozens or hundreds of separate “cells”, and each has enough labor to figure out decorrelated security and operations etc. then it seems quite plausible that they wouldn’t be shut down. Historically “insurgency groups” or hacker networks etc. seem to be very difficult to fully eliminate, even when a ton of resources are thrown at the problem.
I don’t think any of the above would require superhuman abilities, though many parts are challenging, which is part of why evals targeting these skills could provide a useful safety case—e.g. if it’s clear that the AI could not pull off the cybersecurity operation required to not be easily shut down then this is a fairly strong argument that this AI agent wouldn’t be able to pose a threat [Edit: from rogue agents operating independently, not including other threat models like sabotaging things from within the lab etc.].
Though again I am not defending any very strong claim here, e.g. I’m not saying:
that rogue AIs will be able to claim 5+% of all GPUs or an amount competitive with a well-resourced legitimate actor (I think the world could shut them out of most of the GPU supply, and that over time the situation would worsen for the AI agents as the production of more/better GPUs is handled with increasing care),
that these skills alone mean it poses a risk of takeover or that it could cause significant damage (I agree that this would likely require significant further capabilities, or many more resources, or already being deployed aggressively in key areas of the military etc.)
that “somewhat dumb AI agents self-replicate their way to a massive disaster” is a key threat model we should be focusing our energy on
I’m just defending the claim that ~human-level rogue AIs in a world similar to the world of today might be difficult to fully shut down, even if the world made a concerted effort to do so.
e.g. if it’s clear that the AI could not pull off the cybersecurity operation required to not be easily shut down then this is a fairly strong argument that this AI agent wouldn’t be able to pose a threat
I agree that this is a fairly strong argument that this AI agent wouldn’t be able to cause problems while rogue. However, I think there is also a concern that this AI will be able to cause serious problems via the affordances it is granted through the AI lab.
In particular, AIs might be given huge affordances internally with minimal controls by default. And this might pose a substantial risk even if AIs aren’t quite capable enough to pull off the cybersecurity operation. (Though it seems not that bad of a risk.)
Yeah that’s right, I made too broad a claim and only meant to say it was an argument against their ability to pose a threat as rogue independent agents.
Also, are you picturing them gaining money from crimes, then buying compute legitimately? I think the “crimes” part is hard to stop but the “paying for compute” part is relatively easy to stop.
Both legitimately and illegitimately acquiring compute could plausibly be the best route. I’m uncertain.
It doesn’t seem that easy to lock down legitimate compute to me? I think you need to shut down people buying/renting 8xH100 style boxes. This seems quite difficult potentially.
The model weights probably don’t fit in VRAM in a single 8xH100 device (the model weights might be 5 TB when 4 bit quantized), but you can maybe fit with 8-12 of these. And, you might not need amazing interconnect (e.g. normal 1 GB/s datacenter internet is fine) for somewhat high latency (but decently good throughput) pipeline parallel inference. (You only need to send the residual stream per token.) I might do a BOTEC on this later. Unless you’re aware of a BOTEC on this?
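(As a quick sanity check on the “fit with 8-12 of these” claim, here’s a minimal sketch; the per-GPU memory and headroom figures are my assumptions, not from the comment above.)

```python
# Rough feasibility check: can ~5 TB of 4-bit-quantized weights be sharded
# across a handful of 8xH100 boxes? All figures below are assumptions.

WEIGHTS_GB = 5_000        # ~10T params at 4 bits/param
HBM_PER_GPU_GB = 80       # H100 (80 GB variant)
GPUS_PER_BOX = 8
HEADROOM = 0.7            # fraction of HBM usable after KV cache, buffers, etc.

usable_per_box = HBM_PER_GPU_GB * GPUS_PER_BOX * HEADROOM
print(f"usable HBM per box: {usable_per_box:.0f} GB")
print(f"boxes needed (weights only, no headroom): {WEIGHTS_GB / (HBM_PER_GPU_GB * GPUS_PER_BOX):.1f}")
print(f"boxes needed (with headroom):             {WEIGHTS_GB / usable_per_box:.1f}")
```

With no headroom that comes out to ~8 boxes, and with ~30% of memory reserved it’s closer to 11-12, which is roughly the 8-12 range mentioned above.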
I did some BOTECs on this and think 1 GB/s is sort of borderline, probably works but not obviously.
E.g. I assumed a ~10 TB (at fp8) MoE model with a sparsity factor of 4 and a hidden size of 32768.
With 32kB per token you could send at most 30k tokens/second over a 1GB/s interconnect. Not quite sure what a realistic utilization would be, but maybe we halve that to 15k?
If the model was split across 20 8xH100 boxes, then each box might do ~250 GFLOP/token (2 * 10T parameters / (4*20)), so each box would do at most 3.75 PFLOP/second, which might be about ~20-25% utilization.
This is not bad, but for a model with much more sparsity or GPUs with a different FLOP/s : VRAM ratio or spottier connection etc. the bandwidth constraint might become quite harsh.
(the above is somewhat hastily reconstructed from some old sheets, might have messed something up)
For reference, just last week I rented 3 8xH100 boxes without any KYC.
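(In case it’s useful, here’s a minimal sketch reproducing the BOTEC above. The ~2 PFLOP/s dense fp8 peak per H100 is my assumption; the other numbers follow the comment.)

```python
# Reproduces the pipeline-parallel inference BOTEC from the comment above.
# Assumption flagged in the lead-in: dense fp8 peak of ~2 PFLOP/s per H100.

PARAMS = 10e12               # ~10 TB of weights at fp8 -> ~10T parameters
SPARSITY = 4                 # MoE: ~1/4 of parameters active per token
HIDDEN = 32768               # residual stream width
BYTES_PER_ACTIVATION = 1     # fp8 activations
LINK_BYTES_PER_S = 1e9       # 1 GB/s connection between boxes
LINK_EFFICIENCY = 0.5        # halve the theoretical max, as in the comment
N_BOXES = 20                 # 8xH100 boxes in the pipeline
GPUS_PER_BOX = 8
H100_FP8_DENSE_FLOPS = 2e15  # assumed peak per GPU

bytes_per_token = HIDDEN * BYTES_PER_ACTIVATION                      # ~32 kB
tokens_per_s = LINK_BYTES_PER_S * LINK_EFFICIENCY / bytes_per_token  # ~15k
flop_per_token_per_box = 2 * PARAMS / (SPARSITY * N_BOXES)           # ~250 GFLOP
compute_needed = tokens_per_s * flop_per_token_per_box               # ~3.8 PFLOP/s
peak_per_box = H100_FP8_DENSE_FLOPS * GPUS_PER_BOX

print(f"tokens/s through the link: {tokens_per_s:,.0f}")
print(f"per-box compute needed:    {compute_needed / 1e15:.2f} PFLOP/s")
print(f"implied utilization:       {compute_needed / peak_per_box:.0%}")
```

With these assumptions the implied utilization comes out around 24%, consistent with the ~20-25% figure above.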
AIs that do ARA will need to be operating at the fringes of human society, constantly fighting off the mitigations that humans are using to try to detect them and shut them down
Why do you think this? What is the general story you’re expecting?
I think it’s plausible that humanity takes a very cautious response to AI autonomy, including hunting and shutting down all autonomous AIs — but I don’t think the arguments I’m considering justify more than like 70% confidence (I think I’m somewhere around 60%). Some arguments pointing toward “maybe we won’t respond sensibly to ARA”:
There are no laws known to me, in any jurisdiction, prohibiting autonomous AIs from existing (assuming they’re otherwise following the law).
Properly dealing with ARA is a global problem, requiring either buy-in from dozens of countries, or somebody to carry out cyber-offensive operations in foreign countries, in order to shut down ARA models. We see precedent for this kind of international action w.r.t. WMD threats, like US/Israel’s attacks on Iran’s nuclear program, and I expect there’s a lot of tit-for-tat going on in the nation-state hacking world, but it’s not obvious that autonomous AIs would rise to a threat level that warrants this.
It’s not clear to me that the public cares about autonomous AIs existing, at least in many domains (there are some domains, like dating, where people have a real ick). I think if we got credible evidence that Mark Zuckerberg was a lizard or a robot, few people would stop using Facebook products as a result. Many people seem to think various tech CEOs like Elon Musk and Jeff Bezos are terrible, yet still use their products.
A lot of this seems like it depends on whether autonomous AIs actually cause any serious harm. I can definitely imagine a world with autonomous AIs running around like small companies and twitter being filled with “but show me the empirical evidence for risk, all you safety-ists have is your theoretical arguments which haven’t held up, and we have tons of historical evidence of small companies not causing catastrophic harm”. And indeed, I don’t really expect the conceptual arguments for risk from roughly human-level autonomous AIs to convince enough of the public + policy makers that they need to take drastic actions to limit autonomous AIs; I definitely wouldn’t be highly confident that we will respond appropriately in the absence of serious harm. If the autonomous AIs are basically minding their own business, I’m not sure there will be a major effort to limit them.
If AI labs are slamming on recursive self-improvement ASAP, it may be that Autonomous Replicating Agents are irrelevant. But that’s an “ARA can’t destroy the world if AI labs do it first” argument.
ARA may well have more compute than AI labs, especially if the AI labs are trying to stay within the law and the ARA is stealing any money/compute that it can hack its way into (which could be >90% of the internet if it’s good at hacking).
there will be millions of other (potentially misaligned) models being deployed deliberately by humans, including on very sensitive tasks (like recursive self-improvement).
Ok. That’s a world model in which humans are being INCREDIBLY stupid.
If we want to actually win, we need to both be careful about deploying those other misaligned models, and stop ARA.
Alice: That snake bite looks pretty nasty, it could kill you if you don’t get it treated.
Bob: That snake bite won’t kill me, this hand grenade will. Pulls out pin.
Thanks for this comment, but I think this might be a bit overconfident.
constantly fighting off the mitigations that humans are using to try to detect them and shut them down.
Yes, I have no doubt that if humans implement some kind of defense, this will slow down ARA a lot. But:
1) It’s not even clear people are going to try to react in the first place. As I say, most AI development is positive. If you implement regulations to fight bad ARA, you are also hindering the whole ecosystem. It’s not clear to me that we are going to do something about open source. You need a big warning shot beforehand, and it’s not really clear to me that this happens before we reach a catastrophic level. It’s clear they’re going to react to some kinds of ARAs (like ChaosGPT), but there might be some ARAs they won’t react to at all.
2) It’s not clear this defense (say, for example, Know Your Customer requirements for providers) is going to be sufficiently effective to completely clean up the whole mess. If the AI is able to hide successfully on laptops and cooperate with some humans, it is going to be really hard to shut it down. We would have to live with this endemic virus. The only way around this is cleaning out the virus with some sort of pivotal act, but I really don’t like that.
While doing all that, in order to stay relevant, they’ll need to recursively self-improve at the same rate at which leading AI labs are making progress, but with far fewer computational resources.
Not necessarily “at the same rate”. If we don’t solve alignment and we implement a pause on AI development in labs, the ARA AI may still continue to develop. The real crux is how much time the ARA AI needs to evolve into something scary.
Superintelligences could do all of this, and ARA of superintelligences would be pretty terrible. But for models in the broad human or slightly-superhuman ballpark, ARA seems overrated, compared with threat models that involve subverting key human institutions.
We don’t learn much here. From my side, I think that superintelligence is not going to be neglected, and big labs are taking this seriously already. I’m still not clear on ARA.
Remember, while the ARA models are trying to survive, there will be millions of other (potentially misaligned) models being deployed deliberately by humans, including on very sensitive tasks (like recursive self-improvement). These seem much more concerning.
This is not the central point. The central point is:
At some point, ARA is unshutdownable unless you try hard with a pivotal cleaning act. We may be stuck with a ChaosGPT forever, which is not existential, but pretty annoying. People are going to die.
The ARA evolves over time. Maybe this evolution is very slow, maybe fast. Maybe it plateaus, maybe it does not plateau. I don’t know.
This may take an indefinite number of years, but it can be a problem.
the “natural selection favors AIs over humans” argument is a fairly weak one; you can find some comments I’ve made about this by searching my twitter.
I’m pretty surprised by this. I’ve tried to google and not found anything.
Overall, I think this still deserves more research.
1) It’s not even clear people are going to try to react in the first place.
I think this just depends a lot on how large-scale they are. If they are using millions of dollars of compute, and are effectively large-scale criminal organizations, then there are many different avenues by which they might get detected and suppressed.
If we don’t solve alignment and we implement a pause on AI development in labs, the ARA AI may still continue to develop.
A world which can pause AI development is one which can also easily throttle ARA AIs.
The central point is:
At some point, ARA is unshutdownable unless you try hard with a pivotal cleaning act. We may be stuck with a ChaosGPT forever, which is not existential, but pretty annoying. People are going to die.
The ARA evolves over time. Maybe this evolution is very slow, maybe fast. Maybe it plateaus, maybe it does not plateau. I don’t know.
This may take an indefinite number of years, but it can be a problem.
This seems like a weak central point. “Pretty annoying” and some people dying is just incredibly small compared with the benefits of AI. And “it might be a problem in an indefinite number of years” doesn’t justify the strength of the claims you’re making in this post, like “we are approaching a point of no return” and “without a treaty, we are screwed”.
An extended analogy: suppose the US and China both think it might be possible to invent a new weapon far more destructive than nuclear weapons, and they’re both worried that the other side will invent it first. Worrying about ARAs feels like worrying about North Korea’s weapons program. It could be a problem in some possible worlds, but it is always going to be much smaller, it will increasingly be left behind as the others progress, and if there’s enough political will to solve the main problem (US and China racing) then you can also easily solve the side problem (e.g. by China putting pressure on North Korea to stop).
you can find some comments I’ve made about this by searching my twitter
Link here, and there are other comments in the same thread. Was on my laptop, which has twitter blocked, so couldn’t link it myself before.
I push back on this somewhat in a discussion thread here. (As a pointer for people reading through.)
Overall, I think this is likely to be true (maybe 60% likely), but not enough that we should feel totally fine about the situation.
doesn’t justify the strength of the claims you’re making in this post, like “we are approaching a point of no return” and “without a treaty, we are screwed”.
I agree that’s a bit too much, but it seems to me that we’re not at all on the way to stopping open source development, and that we need to stop it at some point; maybe you think ARA is a bit early, but I think we need a red line before AI becomes human-level, and ARA is one of the last arbitrary red lines before everything accelerates.
But I still think the “point of no return to loss of control” claim seems pretty fair to me, because it might be very hard to stop an ARA agent.
Link here, and there are other comments in the same thread. Was on my laptop, which has twitter blocked, so couldn’t link it myself before.
I agree with your comment on twitter that evolutionary forces are very slow compared to deliberate design, but that is not what I wanted to convey (that’s my fault). I think an ARA agent would not only depend on evolutionary forces, but also on the whole open source community finding new ways to quantize, prune, distill, and run the model in a distributed and practical way. I think the main driver of this “evolution” would be the open source community & libraries who will want to create good “ARA”, and huge economic incentives will make agent AIs more and more common and easy in the future.
they’ll need to recursively self-improve at the same rate at which leading AI labs are making progress
But if they have good hacking abilities, couldn’t they just steal the progress of leading AI labs? In that case they don’t have to have the ability to self-improve.
Remember, while the ARA models are trying to survive, there will be millions of other (potentially misaligned) models being deployed deliberately by humans, including on very sensitive tasks (like recursive self-improvement). These seem much more concerning.
So my main reason for worry personally is that there might be an ARA that is deployed with just the goal of “clone yourself as much as possible” or something similar. In this case, the AI does not really have to compete with other AIs for survival as long as it is able to pay for itself and continue spreading, or infect local machines by copying itself onto them, etc. This is a worrisome scenario in that the AI might just be dormant, and already hard to kill. If the AI furthermore has some ability to avoid detection and adapt/modify itself, then I really worry that its goals are going to evolve and get selected to converge progressively toward a full takeover (though it may also plateau), and that we will be completely oblivious to it for most of this time, as there won’t really be any incentive to either detect or fight this thoroughly.
Of course, there is also the scenario of a ChaosGPT-style ARA agent, which I worry could kill many people without ever truly being shut down; and even if we can shut it down, it might take a while.
All in all, I think this is more a question of costs and benefits than of how likely it is. For instance, I think that implementing a Know Your Customer policy for all providers right now could be quite feasible and would slow down the initial steps of an ARA agent a lot.
I feel like the main crux of the argument is:
1. Whether an ARA agent plateaus or goes exponential in terms of abilities and takeover goals.
2. How much time it would take for an ARA agent to fully take over after being released.
I am still very unsure about 1; I could imagine many scenarios where the endemic ARA just stagnates and never really transforms into something more.
However, for 2, I feel like you have a model where it is going to take a long time for such an agent to really take over. I am unsure about that, but even if that were the case, my main concern is that once the seed ARA is released (and it might be only very slightly capable of adaptation at first), then it is going to be extremely difficult to shut it down. If AI labs advance significantly with respect to superintelligence, implementing a pause on AI might not be too late; but if such an agent has already been released, there is not going to be much we can do about it.
Separately, I think the “natural selection favors AIs over humans” argument is a fairly weak one; you can find some comments I’ve made about this by searching my twitter.
I would be very interested to hear more. I didn’t find anything from a quick search on your twitter, do you have a link or a pointer I could read more on for counterarguments about “natural selection favors AIs over humans”?