Then some questions: how long would Moore's law have to continue into the future, with no success in AGI, for that to show that the brain is well optimized for AGI at the circuit level?
A Sperm Whale and a bowl of Petunias.
My first impulse was to answer that Moore's law could go on forever and never produce success in AGI, since 'AGI' isn't just what you get when you put enough computronium together for it to reach critical mass. But even with no improvements in understanding, we could very well arrive at AGI through ridiculous amounts of brute force. In fact, given enough space and time, randomised initial positions, and possibly a steady introduction of negentropy, we could produce an AGI in Conway's Life.
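The point being that Conway's Life is Turing-complete despite having an almost trivially simple update rule. A minimal sketch of that rule in Python (the glider pattern and coordinate convention here are just the standard textbook example):

```python
from itertools import product

def life_step(live):
    """One generation of Conway's Life over a set of live-cell (x, y)
    coordinates: a cell is alive next step iff it has exactly 3 live
    neighbours, or is alive now and has exactly 2 live neighbours."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider translates one cell diagonally every four generations.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# state is now the same glider shifted by (+1, +1)
```

Everything needed for universal computation (and so, in principle, for a brute-forced AGI) emerges from nothing more than that neighbour-counting rule.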
I’ve made some attempts to show rough bounds on the brain’s efficiency; are you aware of some other approach or estimate?
You could find some rough bounds by seeing how many parts of a human brain you can cut out without changing IQ. Trivial little things like, you know, the pre-frontal cortex.
You are just talking around my questions, so let me make it more concrete. An important task of any AGI is higher-level sensor-data interpretation—i.e., seeing. We have an example system in the human brain—the human visual system (HVS), which is currently leaps and bounds beyond the state of the art in machine vision (although the latter is making progress towards the former through reverse engineering).
So machine vision is a subtask of AGI. What is the minimal computational complexity of human-level vision? This is a concrete computer science problem. It has a concrete answer—not “sperm whale and petunia” nonsense.
Until someone builds a system better than the HVS, or proves some complexity bounds, we don’t know how close to optimal the HVS is for this problem—but we also have no reason to believe that it is orders of magnitude off from the theoretical optimum.