I think the big, obvious, enormous difference between GPT-3 and the human brain is that GPT-3 isn’t an agent. It’s not trained for behavior; it’s trained for predictive accuracy.
It’s true that GPT-3 doesn’t do everything that a human brain does, but one of my thoughts when reading Duncan’s post on shoulder advisors was that it really sounds like the brain runs something like GPT-? instances that can be trained on various prediction tasks.
Something of an aside, but what exactly is your definition of ‘agent’?
Depends on the context. :)
If I had to give a general definition, it would be something like “a system whose behavior can usefully be predicted through the intentional stance”.