Honestly this kinda feels like what LLM agents do… with the exception that LLM agents have been trained on a vast corpus including lots of fiction, so their notion of "actions humans would take" tends to be fairly skewed on certain topics.