I don’t doubt that animals could be used the way you describe, but they sound to me like they’d be just another form of narrow AI. Yes, possibly much more advanced and useful than what we have today, but then the “conventional” narrow AI at the time when we have such technology will also be much more advanced and useful. If we’re still talking about the 30-year timeframe, I’m not sure there’s a reason to presume that animal brains will still be able to do things in 30 years that computers can’t.
Remember that emulation involves replicating many fine details instead of abstracting them away: if we just want to make a computer that’s functionally equivalent, the demands on processing power are much lower. We’d have the hardware for that already; we just need to figure out the software. And we’re already starting to have e.g. neural prostheses for hippocampal and cerebellar function in rats, so the idea of the software being on the needed level in 30 years doesn’t seem like that much of a stretch.
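As a rough illustration of why functional equivalence is so much cheaper, here is a back-of-envelope sketch in Python. The synapse count and firing rate are standard order-of-magnitude ballparks; the per-step costs are assumptions I picked purely for illustration, not measured figures:

```python
# Back-of-envelope comparison of emulation cost at two levels of
# abstraction. All figures are order-of-magnitude ballparks; the
# per-step costs in particular are assumptions made for illustration.

SYNAPSES = 1e14          # roughly 10^14 synapses in a human brain
MEAN_RATE_HZ = 10        # average firing rate on the order of 10 Hz

# Fine-detail emulation: advance a detailed model of every synapse
# on a fine time step, whether or not anything is happening.
FINE_TIMESTEP_HZ = 1e4          # assumed 10 kHz update loop
OPS_PER_DETAILED_STEP = 1e2     # assumed cost of one detailed update

# Functional equivalence: only process actual spike events.
OPS_PER_SPIKE_EVENT = 1e1       # assumed cost of one abstracted event

fine_ops = SYNAPSES * FINE_TIMESTEP_HZ * OPS_PER_DETAILED_STEP
functional_ops = SYNAPSES * MEAN_RATE_HZ * OPS_PER_SPIKE_EVENT

print(f"fine-detail emulation: ~{fine_ops:.0e} ops/s")        # ~1e20
print(f"functional equivalent: ~{functional_ops:.0e} ops/s")  # ~1e16
print(f"ratio: ~{fine_ops / functional_ops:.0e}x")
```

Even under these made-up but not unreasonable assumptions, the abstracted version comes out about four orders of magnitude cheaper, and the gap only widens at finer levels of biological detail.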
> Remember that emulation involves replicating many fine details instead of abstracting them away: if we just want to make a computer that’s functionally equivalent, …
While thirty years is in fact a long time away, I am not at all confident that we will be able to emulate cognition in a non-neural environment within that window. (The early steel crafters could make high-quality steel quite reliably… by just throwing more raw resources at the problem, long before anyone understood the underlying chemistry.)
What I’m saying is that the approach of emulating function rather than anatomy has been around since the early days of Minsky and is responsible for sixty years’ worth of “AI is twenty years away” predictions. I admit the chances are high that this sentiment is biasing me in favor of the animat approach being successful earlier than full-consciousness emulation.
> I don’t doubt that animals could be used the way you describe, but they sound to me like they’d be just another form of narrow AI.
I tend to follow the notion of the “Society of Mind”: consciousness as a phenomenon that arises from sub-agents within the brain. (Note: this is not a claim that consciousness is an ‘illusion’, but rather that there are steps of intermediate explanation between consciousness and brain anatomy, much as there are steps of intermediate explanation between atoms and bacteria.)
While horses and parrots might be only weakly generally intelligent, they are generally intelligent; they possess a non-zero “g” factor. Integrating various narrow AIs into such an animal would exploit that characteristic, retaining its g while providing socioeconomic utility. And that’s an important point: narrow AI is not merely poor at telling which inputs are applicable to it and which are not; it is incapable of discerning when its algorithms are useful. Even a weak possession of g, combined with powerful narrow AIs, would provide a very powerful intermediate step between “here” and “there” (where “there” is fully synthetic AGI in which g is mechanically understood).
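A toy sketch of the division of labor I have in mind (the modules and the keyword matching are stand-ins I made up for illustration; nothing here is meant as a real integration mechanism):

```python
from typing import Callable

# Toy illustration only. The narrow modules below are stand-ins:
# each computes an answer for whatever it is handed, with no notion
# of whether it is the right tool for the job.

def route_planner(task: str) -> str:
    return f"shortest path computed for {task!r}"

def chess_engine(task: str) -> str:
    return f"best move found for {task!r}"

NARROW_MODULES: dict[str, Callable[[str], str]] = {
    "navigate": route_planner,
    "chess": chess_engine,
}

def g_bearing_dispatcher(task: str) -> str:
    # The generally-intelligent substrate supplies exactly the judgment
    # the narrow modules lack: deciding which algorithm, if any, is
    # useful for the situation at hand.
    for keyword, module in NARROW_MODULES.items():
        if keyword in task:
            return module(task)
    return f"no narrow module applies to {task!r}; using general behavior"

print(g_bearing_dispatcher("navigate to the far pasture"))
print(g_bearing_dispatcher("open the gate latch"))  # nothing narrow applies
```

The point of the sketch is just the asymmetry: the narrow modules never refuse work they are unsuited for, while the dispatcher’s sole contribution is knowing when each of them is useful.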
Side note:
> And we’re already starting to have e.g. neural prostheses for hippocampal and cerebellar function in rats, …
I’m confused as to how we could be using the same fact to support opposing conclusions.