Two questions seem relevant here:

1. To what extent are “things like LLMs” and “things like AutoGPT” very different creatures, with the latter sometimes behaving like a unitary agent?
2. Assuming the distinction in (1) matters, how often should we expect to see AutoGPT-like things?
(At the moment, both of these questions seem open.)