Packaging an LLM as the payload of an unintelligent computer virus could also be called “self-replicating AI”, so that on its own can’t be a qualitative “red line”. Instead, the payload AI needs to pose enough of a problem in its own right, or be sufficiently better than a regular computer virus at spreading, or have a sufficient propensity to manifest an effective virus wrapper with less human effort going into creating it. Smarter LLMs (ones that still can’t develop ground-breaking novel software) are also currently handicapped by needing to run in a datacenter, which could be a less familiar environment that’s harder to hide in, while a regular computer virus can live on personal computers.
I mostly agree, although I would also accept as “successfully self-replicating” either being sneaky enough (like your computer virus example), or being self-sufficient enough to earn and spend the resources needed to acquire sufficient additional compute to create a copy of itself (and then actually doing so).
So, yeah, not quite at the red line point in my books. But not so far off!
I wouldn’t find this particularly alarming in itself though, since I think “barely over the line of able to sustain itself and replicate” is still quite a ways short of being dangerous or leading to an AI population explosion.