You are just talking around my questions, so let me make it more concrete. An important task of any AGI is higher-level interpretation of sensor data, i.e. seeing. We have an example system in the human brain: the human visual system (HVS), which is currently leaps and bounds beyond the state of the art in machine vision (although the latter is making progress toward the former through reverse engineering).
So machine vision is a subtask of AGI. What is the minimal computational complexity of human-level vision? This is a concrete computer science problem. It has a concrete answer, not "sperm whale and petunia" nonsense.
Until someone builds a system better than the HVS, or proves complexity bounds for the problem, we don't know how close the HVS is to optimal, but we also have no reason to believe it is orders of magnitude away from the theoretical optimum.
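To give "orders of magnitude" some concrete scale, here is a back-of-envelope sketch in Python. Every figure in it (axon count, firing rates, neuron and synapse counts) is a rough order-of-magnitude assumption chosen for illustration, not a measured value, and the result is only meant to show the kind of quantity the complexity question is about:

```python
# Back-of-envelope estimate of the HVS's raw throughput.
# All figures below are rough order-of-magnitude assumptions.

optic_nerve_axons = 1e6        # per eye (assumed order of magnitude)
mean_firing_rate_hz = 10       # assumed average spike rate

# Input bandwidth from both retinas, in spikes per second
input_rate = 2 * optic_nerve_axons * mean_firing_rate_hz

visual_cortex_neurons = 5e9    # assumed
synapses_per_neuron = 1e4      # assumed
# Crude "synaptic operations per second" for visual cortex
synaptic_ops = visual_cortex_neurons * synapses_per_neuron * mean_firing_rate_hz

print(f"retinal input:       ~{input_rate:.0e} spikes/s")
print(f"cortical throughput: ~{synaptic_ops:.0e} synaptic ops/s")
```

Under these assumptions the HVS does on the order of 10^14 synaptic operations per second on a ~10^7 spikes/s input stream; whether the minimal algorithm for human-level vision needs anything like that budget is exactly the open question.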