Ugh. I like your analysis a lot, but I feel like it’s worthless because your Overton window is too narrow.
I write about this in more depth elsewhere, but I’ll give a brief overview here.
Misuse risk is already very high. Even open-weights models, when fine-tuned on bioweapons-relevant papers, can produce thorough and accurate plans for insanely deadly bioweapons. With surprisingly few resources, a bad actor could wipe out the majority of humanity within a few months of releasing such a weapon.
A global moratorium on AGI research is nearly impossible to enforce. It would require government monitoring and control of personal computers worldwide. Stopping big training runs doesn’t help much; it buys one or two years of delay at best. Once more efficient algorithms are discovered, personal computers will be able to create dangerous AI. Scaling is the fastest route to more powerful AI, but not the only route.
Further discussion
AGI could be here as soon as 2025, which isn’t even on your plot.
Once we get close enough to AGI for recursive self-improvement (an easier threshold to hit than AGI itself), rapid algorithmic improvements will make it cheap, quick, and easy to create AGI. From there, advancement toward ASI will likely be rapid and easy as well, because the AGI will be able to do the work.
https://manifold.markets/MaxHarms/will-ai-be-recursively-self-improvi
AI may soon be given traits that make it self-aware/self-modeling, internal-goal-driven, etc. In other words, conscious. And someone will probably set it to the task of self-improving. Nobody has to let it out of the box if its creator never puts it in a box to begin with. Nobody needs to steal the code and weights if they are published publicly. Any coherent plan must address what will be done about such models: runnable on personal computers, with open code and open weights, and with tremendous capability for harm if programmed or convinced to harm humanity.
https://www.lesswrong.com/posts/NRZfxAJztvx2ES5LG/a-path-to-human-autonomy