What do you mean? Current LLMs are stateless: if an attempt to solve the task fails, just reset the history and retry.
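(For concreteness, "reset the history and retry" amounts to something like the sketch below, where `call_llm` and `solved` are hypothetical placeholders for whatever chat-completion wrapper and success check you actually use:)

```python
# Minimal sketch of "reset the history and retry". call_llm(messages) and
# solved(answer) are hypothetical placeholders, not a specific API.

def solve_with_retries(task: str, solved, call_llm, max_attempts: int = 5):
    """Retry a task with a fresh history on every attempt.

    Because the model itself is stateless, "resetting" just means starting
    each attempt from a clean message list instead of carrying over the
    failed transcript.
    """
    for attempt in range(max_attempts):
        messages = [{"role": "user", "content": task}]  # fresh context, no failed attempts
        answer = call_llm(messages)
        if solved(answer):
            return answer
    return None  # give up after max_attempts
```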
I mean something like AutoGPT, where there is no human in the loop who could reset the history.
For example, I’ve seen ChaosGPT get stuck in a loop of “researching nuclear weapons”. If it could erase those entries completely from its context, it would probably generate more interesting ideas (though whether we need that is another question).
Isn’t that trivial to program? For example, the AutoGPT UI could list pending tasks with icons next to them, and clicking a trashcan icon would erase that task completely from the context. That doesn’t require any LLM-level machinery like LEACE. A rough sketch of the idea is below.
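(This is not AutoGPT’s actual code, just an illustration of the mechanism: if the agent’s context is rebuilt from a task list on every step, deleting a task removes it from the context entirely. `Task`, `TaskList`, and `build_context` are made-up names.)

```python
# Sketch of the "trashcan" idea: erased tasks simply never reappear in the
# prompt, because the prompt is rebuilt from the task list each step.

from dataclasses import dataclass, field

@dataclass
class Task:
    id: int
    description: str

@dataclass
class TaskList:
    tasks: list[Task] = field(default_factory=list)

    def erase(self, task_id: int) -> None:
        """Handler for the trashcan icon: drop the task completely."""
        self.tasks = [t for t in self.tasks if t.id != task_id]

    def build_context(self, goal: str) -> str:
        """Rebuild the prompt from scratch; erased tasks never appear in it."""
        lines = [f"Goal: {goal}", "Pending tasks:"]
        lines += [f"- ({t.id}) {t.description}" for t in self.tasks]
        return "\n".join(lines)
```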
And of course you could also run another LLM instance with specific instructions, acting as a kind of censor that judges which entries should be erased automatically.
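(Roughly along these lines; the censor prompt, the YES/NO protocol, and the `call_llm` helper are all assumptions for illustration, not an existing AutoGPT feature:)

```python
# Sketch of the automatic variant: a second LLM instance, prompted to act as
# a censor, decides which context entries to erase before the next step.

CENSOR_PROMPT = (
    "You prune an agent's working memory. Answer YES if the entry below is an "
    "unproductive dead end (e.g. a repeated failed approach) and should be "
    "erased from the context, otherwise answer NO.\n\nEntry:\n{entry}"
)

def censor_context(entries: list[str], call_llm) -> list[str]:
    """Return only the entries the censor model decides to keep."""
    kept = []
    for entry in entries:
        verdict = call_llm([{"role": "user",
                             "content": CENSOR_PROMPT.format(entry=entry)}])
        if not verdict.strip().upper().startswith("YES"):
            kept.append(entry)
    return kept
```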