That’s only true if a single GPU (or a small number of GPUs) is sufficient to build a superintelligence, right? I expect it to take many years to go from “it’s possible to build superintelligence with a huge multi-billion-dollar project” to “it’s possible to build superintelligence on a few consumer GPUs”. (Unless, of course, someone builds a superintelligence which then figures out how to make GPUs many orders of magnitude cheaper, but at that point it’s moot.)
Sadly, no. It doesn’t take superintelligence to be deadly. Even current open-weight LLMs, like Llama 3 70B, know quite a lot about genetic engineering. The combination of a clever, malicious human and an LLM able to offer help and advice is sufficient.
Furthermore, there is the consideration of a “seed AI”: one competent enough to keep improving rather than plateau. If a competent human is helping it and getting it unstuck, the bar is even lower. My prediction is that the bar for a “seed AI” is lower than the bar for AGI.