If it is this easy to create an autonomous agent that can do major damage, it is much better to find that out now rather than wait until the damage would be worse or even existential. If such a program poses an existential risk now, then we live in a very, very doomed world, and a close call as soon as possible would likely be our only hope of survival.
If you mean it, you might as well link to the GitHub repo, suggestively named "BabyAGI":
https://github.com/yoheinakajima/babyagi