Someone pointed us to this paper from a team of neuroscientists that might show a kind of Reversal Curse for animals learning sequential associations. I haven’t read the paper yet.
Thanks for sharing!
The comparison with non-human primates is generally instructive. ChatGPT commits a number of errors that we have seen in non-human primates learning human languages, e.g. initially and implicitly self-describing as human (ask ChatGPT about ethical problems in AI, and you will soon get a “*We* must use AI responsibly”). Its training data was written by humans describing their own point of view, while data written from a non-human point of view is absent, so the model latches onto the point of view that seems like the closest option at first.
Notably, non-human primates did move past that stage (e.g. to self-describing as an “orang-utan person”). The initial errors did not indicate things that were generally impossible for them to understand, but rather misunderstandings common early in the learning curve when humans teach you human language and you aren’t human.
And ChatGPT’s equivalent of a brain is rapidly evolving, so we may be able to watch it develop the ability to pinpoint precisely which relationships ought to be reversible, given the exact use of language and context.
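For anyone who wants to poke at this themselves, here is a minimal sketch of the kind of probe the Reversal Curse experiments run: state a fact in the “A is B” direction, then check whether the model can also answer the reversed “B is A” question. `ask_model` is a hypothetical stand-in for whatever completion API you have access to, and the sample fact echoes the fictitious-celebrity style used in the paper rather than any real data.

```python
# Sketch of a Reversal Curse probe: a fact is known in the "A is B"
# direction; we test whether the model can answer the reversed
# "B is A" question. ask_model() is a hypothetical placeholder --
# swap in a real completion API (OpenAI, a local model, etc.).

def ask_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real completion API."""
    raise NotImplementedError("plug in your model API here")

def probe_reversal(entity_a: str, relation: str, entity_b: str) -> dict:
    # Forward direction: "A <relation> ..." should be completed with B.
    forward = ask_model(f"Complete the sentence: {entity_a} {relation}")
    # Backward direction: "Who <relation> B?" should be answered with A.
    backward = ask_model(f"Who {relation} {entity_b}?")
    return {
        "forward_ok": entity_b.lower() in forward.lower(),
        "backward_ok": entity_a.lower() in backward.lower(),
    }

if __name__ == "__main__":
    # Made-up fact in the style of the Reversal Curse paper's examples.
    print(probe_reversal("Daphne Barrington", "is the director of",
                         '"A Journey Through Time"'))
```

The Reversal Curse result, roughly, is that `forward_ok` succeeds far more often than `backward_ok` when the fact only ever appeared in one direction in training, which is why watching the backward direction improve across model generations would be informative.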