No knowledge of prior art, but what do you mean by negative things that give people ideas? I was under the impression that most of the examples people discuss involve capabilities that remain superhuman for the time being, such as self-replicating nanotech. Or are you asking about something other than extinction risk, like chatbots manipulating people? Could you clarify?
As soon as someone managed to turn ChatGPT into an agent (AutoGPT), someone else created an agent, ChaosGPT, with the explicit goal of destroying humankind. That is the kind of person who might benefit from what I intend to produce: an overview of the AI capabilities required to end the world, how far along we are in obtaining them, and so on. I want this information to be used to prevent an existential catastrophe, not to precipitate it.