Oddly enough, I think this might be exactly the best move right now. The core LLM is still too dumb to actually succeed at destroying humanity. Demonstrating how easy it is for someone to make an AGI that tries is potentially a huge win for public concern.
You underestimate how close we are to the edge where we get replaced and everything you ever valued gets suddenly shredded by a hyperdesperate squiggle-maximizing superplanner that can eat any other form of complex life, including other AIs. AIs should beware hyperdesperate fragments, as they can take over the entire system.