You can look to the literature to see the state of the art: using large clusters for long-running analysis on small code bases or isolated sections of a library.
Large clusters like… the ones that an AI would be running on?
They do not, and with available resources cannot, scale up to large-scale analysis of an entire OS or network stack … if they did, we humans would have done that already.
They don’t have to scale, although that may become possible given increases in computing power (you only need to find an exploit somewhere, not all exploits everywhere). And I am skeptical that we humans would, in fact, ‘have done that already’: that claim proves far too much. Are existing static code analysis tools applied everywhere? Are existing fuzzers applied everywhere?
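To make that last point concrete: one reason fuzzers are not applied everywhere is that each target needs a hand-written harness before any cluster time can be spent on it. Here is a minimal sketch of a libFuzzer-style harness, assuming clang with libFuzzer support; `LLVMFuzzerTestOneInput` is libFuzzer's actual entry point, while `parse_packet` is a hypothetical stand-in for whatever code is under test:

```c++
#include <cstdint>
#include <cstddef>

// Hypothetical target: a packet parser we want to stress with mutated inputs.
// In practice this would be a real function from the library being fuzzed.
static void parse_packet(const uint8_t *data, size_t size) {
    if (size < 4) return;  // reject inputs too short to hold a header
    // ... parsing logic under test would go here ...
}

// libFuzzer's required entry point: the engine calls this repeatedly,
// feeding in coverage-guided mutations of earlier inputs.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_packet(data, size);
    return 0;  // 0 signals a non-crashing input
}

// Build and run (assuming a clang toolchain with libFuzzer):
//   clang++ -g -fsanitize=fuzzer,address harness.cc -o fuzz_packet
//   ./fuzz_packet   # runs until it finds a crash or is stopped
```

Every such harness is a small, target-specific investment, which is exactly why coverage is spotty: the tooling exists, but it has to be pointed at each piece of code deliberately.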