Computer security is an endless game of cat and mouse. Here you showed a URL pointing to something the cat knows. But there is plenty of literature (and there are plenty of databases) on what the mice have learned over the years.
If an AI somehow became self-aware and had access to the knowledge of all the previous mice and cats, I wouldn’t be surprised if it could break free. But that’s a big if.
I agree with the cat-and-mouse metaphor, and that we should assume an AI to be hyper-competent.
At the same time, it will be restricted to operating within the constraints of the systems it can influence. My main point, which I admit was poorly made, is that cross-site scripting attacks can be covered with a small investment, which eliminates clever JavaScript as a possible attack vector. I would place lower probability on this being the way an AI escapes.
I would place higher probability on an AI exploiting a buffer-overflow-style memory error similar to the one you referenced. Furthermore, I would expect it to be in software the AI is running on top of and can easily experiment and iterate on (the OS, the container, whatever). Whereas browser interactions are limited in iteration by the number of times a user calls the service, one would expect the local software can be manipulated and experimented with constantly, constrained only by the CPU/IO resources available.