Thus my model (or the systems/cybernetic model in general) correctly predicted—well in advance—that LLMs would have anthropomorphic cognition: mirroring many of our seemingly idiosyncratic cognitive biases, quirks, and limitations. So we have AGI that can write poems and code (like humans) but struggles with multiplying numbers (like humans), and that generally exhibits human-like psychology: it is susceptible to flattery, priming, the Jungian “shadow self” effect, etc.
Is this comment the best example of your model predicting anthropomorphic cognition? I recognize my comment here could sound snarky (“is that the best you can do?”); it’s not intended that way—I’m sincerely asking if that is the best example, or if there are better ones in addition.
It’s more fleshed out in the 2015 ULM post.