Really good post. Based on this, it seems extremely valuable to me to test the assumption that we already have animal-level AIs. I understand that this is difficult due to animals' built-in brain structure, different training distributions, and the challenge of creating a simulation as complex as real life. It still seems like we could test this assumption by doing something along the lines of training a neural network to perform as well as a cat's visual cortex on image recognition. I predict that if this were done in a way that accounted for the flexibility of real animals, the AI wouldn't perform better than an animal at around the cat or raven level (80% confidence). I predict that even if an AI were able to outperform a part of an animal's brain in one area, it would not be able to outperform the animal in more than three separate areas as broad as vision (60% confidence). I am quite skeptical that the probability of AGI within 10 years is greater than 20%, but contrary evidence here could definitely make me change my mind.
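To make the machine side of the proposed test concrete, here is a minimal sketch, assuming CIFAR-10 as a stand-in recognition task (the dataset, architecture, and hyperparameters are all placeholder choices, and the animal-side baseline would have to come from behavioral experiments, which this obviously doesn't capture):

```python
# Hypothetical sketch of the proposed test: train a small network on a
# standard image-recognition benchmark, to be compared later against a
# behavioral baseline for the animal (not included here).
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

train_set = torchvision.datasets.CIFAR10(
    root="data", train=True, download=True, transform=T.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# Small convolutional network; the architecture is arbitrary for the sketch.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),  # 32x32 input pooled twice -> 8x8 feature maps
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```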
To be clear, the comparison to animal brains is one of roughly equivalent capabilities/intelligence and, ultimately, economic value. A direct model of even a small animal brain, like that of a honey bee, may very well come after AGI because of the lack of economic incentives.
It still seems like we could test this assumption by doing something along the lines of training a neural network to perform as well as a cat's visual cortex on image recognition. I predict that if this were done in a way that accounted for the flexibility of real animals, the AI wouldn't perform better than an animal at around the cat or raven level
We have already trained ANNs to perform as well as the human visual cortex on image recognition, so I don't quite get what you mean by "accounted for the flexibility of real animals". And LLMs perform as well as the human linguistic cortex in most respects.
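For reference, that claim is usually operationalized as the top-line accuracy of an off-the-shelf ImageNet classifier. A minimal sketch, assuming torchvision's pretrained ResNet-50 (the image path is a placeholder):

```python
# Load a pretrained ImageNet model and classify a single image.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()       # standard ImageNet preprocessing
img = Image.open("example.jpg")         # placeholder path
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], top_prob.item())
```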
Computer vision is just scanning for high-probability matches between an area of the image and a set of tokenized segments that have an assigned label. No conceptual understanding of objects or actions in an image. No internal representation, and no expectation of what should "be there" a moment later. And no form of attention to drive focus (area of interest).
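A toy version of the matching procedure described above, written as a sliding-window normalized cross-correlation in NumPy; this illustrates the characterization in this comment, not any particular production system:

```python
# Slide a labeled template over the image and score each location by
# normalized cross-correlation; report the highest-scoring window.
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Return (row, col, score) of the best-matching window."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best = (0, 0, -1.0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            window = image[r:r + th, c:c + tw]
            w = (window - window.mean()) / (window.std() + 1e-8)
            score = float((w * t).mean())  # correlation in [-1, 1]
            if score > best[2]:
                best = (r, c, score)
    return best

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tmpl = img[20:28, 30:38].copy()    # plant a known patch as the "label"
print(match_template(img, tmpl))   # should recover (20, 30, ~1.0)
```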
Canned performances and human control just off camera give the false impression of animal behaviors in what we see today, but there has been little progress since the mid-1980s in behavior-driven research. *Learning to play a video game with only 20 hours of real-time play would be a better measure than trying to understand (and match) animal minds (though good research in the direction of human-level AI will absolutely include that).
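As a sketch of how that 20-hour budget could be enforced, here is a hard cap on environment steps, assuming a 60 FPS game (so 20 h * 3600 s * 60 frames/s, roughly 4.32M frames); the environment and the random policy are placeholders for whatever agent is actually being tested:

```python
# Enforce a fixed budget of real-time frames, then stop learning/evaluation.
import gymnasium as gym

FRAME_BUDGET = 20 * 3600 * 60      # 4,320,000 frames at 60 FPS

env = gym.make("CartPole-v1")      # stand-in for the video game
obs, info = env.reset(seed=0)
frames_used = 0
episodes = 0

while frames_used < FRAME_BUDGET:
    action = env.action_space.sample()  # placeholder for the learner's policy
    obs, reward, terminated, truncated, info = env.step(action)
    frames_used += 1
    if terminated or truncated:
        episodes += 1
        obs, info = env.reset()

print(f"episodes completed within the 20-hour budget: {episodes}")
env.close()
```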