The code for Auto-GPT itself is very easy to write (it was mostly written by a few developers as a side project, I think?).
The fact that the current code is very easy to write does not in itself suggest that you don't get something more powerful when you spend more effort.
In general, LLMs can be fine-tuned for specific applications. Currently, Auto-GPT isn't benefiting from fine-tuning, but it could be, and such an effort would take more cognitive work; a rough sketch of what that could look like is below.
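To make the point concrete, here is a minimal sketch of what application-specific fine-tuning could look like, using the Hugging Face Transformers Trainer API. This is not how Auto-GPT works today: the base model, the hypothetical agent_transcripts.jsonl dataset of agent runs, and the hyperparameters are all placeholder assumptions, chosen only to show that the extra effort is a concrete, doable engineering step rather than something exotic.

```python
# Hypothetical sketch: fine-tuning a causal LM on agent transcripts.
# Model name, dataset file, and hyperparameters are placeholders, not
# anything Auto-GPT actually uses.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "gpt2"  # placeholder; a serious effort would start from a much larger model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical dataset of agent runs (task -> plan -> tool calls -> outcome),
# stored as one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="agent_transcripts.jsonl")["train"]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-agent-model",
        num_train_epochs=1,
        per_device_train_batch_size=2,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal LM) training targets.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point isn't the specific library calls; it's that turning a loop of raw GPT-4 calls into a system trained on its own task traces is the kind of additional cognitive work that the simplicity of the current code says nothing about.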