In December 2022, awash in recent AI achievements, I grew concerned that much of the technology had become highly synergistic over the previous couple of years. Essentially: AI-type-X (e.g., Stable Diffusion) can help improve AI-type-Y (e.g., Tesla self-driving) across many, many pairs of X and Y. And now, not even four months later, we have papers on GPT-4’s ability to self-reflect and self-improve. Given how badly human minds are known to predict geometric progression, I have started to feel that we are already past the AI singularity’s “event horizon.” Even slamming on the brakes now doesn’t seem likely to stop our fall into this abyss (not to mention how poorly aligned the incentives of Microsoft, Tesla, and Google are with pulling the train brake). My imagined “event horizon” was always “self-improvement,” since transistorized neurons would operate so much faster than chemical ones. Well, here we are. We’ve seen dozens of emergent AI capabilities over the past year, and barely anyone knows that these systems can use tools, have learned to read text on billboards in images, and more … all without explicit training to do so. They have learned how to learn, and yet we are broadening the scope of our experiments instead of shaking these people by the shoulders and asking, “What in the hell are you thinking, man!?”