Second: if all that’s being printed is the functionality but not the specific structures—that is, the ‘blank anatomy’—which can then be trained through more “narrow” systems into a specific goal-state… [...] Third: my assertion was “printed human brain”, not “bio-printed duplicate of a person”
Okay, I’ll grant that it might be used to create rough blank anatomies. But in that case, I doubt that the end result would be all that different from a newborn’s brain. Maybe you could put some of the coarse-level structure in place already so that childhood development would go faster, but extrapolating the consequences of something like that would require either lots of trial and error or hugely advanced neuroscience, far better than what would be required for uploading.
I expect that the legal hurdles would also be quite immense. For uploading, the approaches that currently look realistic are either preserving and destructively scanning the brain of a recently-deceased person, or getting an exocortex and gradually having your mind transition over. Both can be done with the consent of the person in question. But if you’re printing a new brain, that’s an experimental process creating a new kind of mind that might very well end up insane or mentally disabled. I don’t think that any ethics committee would approve that. Seeing that even human reproductive cloning, which is far less freaky than this, has already been banned by e.g. the European Union and several US states, I expect that this would pretty quickly end up legislated into non-existence.
even so, the current track for operational “human-emulation” AGI puts us decades away from having massive, power-gobbling supercomputers [...] I could easily see something akin to a layer-by-layer printing process that included bioscaffolding and temporary synaptic substitutes being integrated with bioprinting sometime within thirty years.
Data points:
Thirty years from now would be 2041. The people participating in the whole brain emulation roadmap workshop thought that the level of abstraction needed to emulate a human brain in software would be somewhere around their levels 4-6 (see page 13). The same roadmap estimates (on page 80) that for people willing to purchase a $1 million supercomputer, the necessary computing power would become available in 2019 (level 4), 2033 (level 5), or 2044 (level 6). (Presuming that current hardware trends keep up.)
You might very well be right that there still won’t be enough computing power in thirty years to run emulations—at least not very many of them. An exocortex approach also requires a considerable fraction of the necessary computing power to be carried inside a human body. It might very well take until 2100 or so before that level of computing power is available.
On the other hand, it could go the other way around. All extrapolations to such a long time away are basically wild guesses anyway.
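To make those dates less of a black box, here is a minimal back-of-the-envelope sketch in Python. The per-level FLOPS requirements (1e18, 1e22, 1e25 for levels 4/5/6) are the rough figures I recall the roadmap using, and the 2011 baseline and doubling time are purely my own assumptions; with these constants it lands in the same general ballpark as the roadmap’s dates, and nudging any constant shifts the answers by years.

```python
# Back-of-the-envelope version of a roadmap-style estimate.
# ASSUMPTIONS (mine, not the roadmap's exact method): ~1e15 FLOPS per
# $1 million in 2011, and performance per dollar doubling every ~1.1
# years. The per-level requirements are rough figures from the roadmap.
import math

FLOPS_PER_MILLION_USD_2011 = 1e15  # assumed baseline
DOUBLING_TIME_YEARS = 1.1          # assumed hardware trend

def year_available(required_flops, baseline=FLOPS_PER_MILLION_USD_2011,
                   start_year=2011, doubling=DOUBLING_TIME_YEARS):
    """Year when `required_flops` costs about $1M, under steady doubling."""
    doublings_needed = math.log2(required_flops / baseline)
    return start_year + doublings_needed * doubling

for level, flops in [(4, 1e18), (5, 1e22), (6, 1e25)]:
    print(f"level {level}: ~{year_available(flops):.0f}")
# prints roughly 2022, 2037, 2048 with these constants
```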
The same roadmap estimates (on page 80) that for people willing to purchase a $1 million supercomputer, the necessary computing power would become available in 2019 (level 4), 2033 (level 5), or 2044 (level 6). (Presuming that current hardware trends keep up.)
I cannot help but note that if these predictions are meant to represent real-time, human-thought-speed equivalence… then I find them to be somewhat… optimistic. The Blue Brain project’s neural emulation runs at… what, 10% the speed of a human’s neurons? I recall the ‘cat brain’ fiasco had it at 1/83rd the equivalent processing speed.
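Those slowdown factors can at least be translated into delay directly: under exponential hardware growth, a constant-factor slowdown just pushes the date out by log2(factor) doublings. A minimal sketch, again assuming a ~1.1-year doubling time (my number, not anything from the roadmap):

```python
# How much do the quoted slowdowns shift a real-time date?
import math

DOUBLING_TIME_YEARS = 1.1  # assumed hardware trend

def extra_years(slowdown_factor, doubling=DOUBLING_TIME_YEARS):
    """Years of hardware growth needed to close a constant-factor gap."""
    return math.log2(slowdown_factor) * doubling

print(f"10x slowdown: +{extra_years(10):.1f} years")  # the Blue Brain figure
print(f"83x slowdown: +{extra_years(83):.1f} years")  # the 'cat brain' figure
```

By this reckoning even the 1/83rd figure only delays real-time equivalence by about seven years, which suggests the slowdown factor matters less than whether the doubling trend itself holds.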
-- side note: In reading your responses it seems to me that you are somewhat selectively focused on human minds/brains. Why is that? Do you consider the notion of ‘uplifted’ bioprinted animals unworthy of discussion? It seems the far more interesting topic to me. (For example, in laboratories we have already “emulated” memories and recall capabilities for rats. In ten years’ time I have a strong suspicion that it should be feasible to develop smaller animals on this order that are integrated with devices meant to augment their memory capabilities and direct their cognition towards specific tasks. These could then be used as superagents governing ‘narrow’ AI functionality—a sort of hybrid between today’s commercial AI offerings (think “Siri”) and truly general artificial intelligence.)
I could imagine, furthermore, scenarios where emulated lower-mammal brains (or birds—they seem to do more with fewer neurons already, given the level of intelligence they evince despite lacking a neocortex) are hooked into task-specific equivalents of iOS’s ‘Siri’ or Babelfish or Watson. While not sufficient for human uploads, it still occurs to me that an emulated parakeet hooked up to Watson would make a rather useful receptionist.
I don’t doubt that animals could be used the way you describe, but they sound to me like they’d be just another form of narrow AI. Yes, possibly much more advanced and useful than what we have today, but then the “conventional” narrow AI at the time when we have such technology will also be much more advanced and useful. If we’re still talking about the 30 year timeframe, I’m not sure if there’s a reason to presume that animal brains can still do things in 30 years that computers can’t.
Remember that emulation involves replicating many fine details instead of abstracting them away—if we just want to make a computer that’s functionally equivalent, the demands on processing power are much lower. We’d have the hardware for that already; we just need to figure out the software. And we’re already starting to have e.g. neural prostheses for hippocampal and cerebellar function in rats, so the idea of the software being on the needed level in 30 years doesn’t seem like that much of a stretch.
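To illustrate the distinction with a toy example (entirely my own, with illustrative numbers; not anything from the prosthesis work): the “emulation” below steps a leaky integrate-and-fire membrane equation through ten thousand fine-grained timesteps, while the functional equivalent reproduces the same input-to-firing-rate mapping with a single closed-form expression.

```python
# Emulation vs. functional equivalence, as a toy contrast.
import math

TAU, THRESHOLD = 0.02, 1.0  # membrane time constant (s), spike threshold

def detailed_rate(current, dt=1e-4, t_max=1.0):
    """Firing rate found by stepping the membrane equation in fine
    detail: the 'emulation' approach, expensive per simulated second."""
    v, spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (-v / TAU + current)
        if v >= THRESHOLD:
            spikes += 1
            v = 0.0
    return spikes / t_max

def functional_rate(current):
    """The same input-output map as one formula (the standard LIF rate):
    the 'functional equivalence' approach, nearly free to compute."""
    v_inf = current * TAU
    if v_inf <= THRESHOLD:
        return 0.0
    return 1.0 / (TAU * math.log(v_inf / (v_inf - THRESHOLD)))

for i in (40.0, 60.0, 100.0):
    print(i, detailed_rate(i), round(functional_rate(i), 1))
```

The two agree to within discretization error, but the second needs none of the fine detail, which is exactly why the hardware demands drop so much.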
Remember that emulation involves replicating many fine details instead of abstracting them away—if we just want to make a computer that’s functionally equivalent,
While thirty years is in fact a long time away, I am not at all confident that we will be able to emulate cognition in a non-neural environment within that window. (The early steel crafters could make high-quality steel quite reliably without understanding the underlying chemistry… by just throwing more raw resources at the problem.)
What I’m saying is that the approach of emulating function rather than anatomy has been around since the early days of Minsky, and is responsible for sixty years’ worth of “AI is twenty years away” predictions. I admit the chances are high that this sentiment is biasing me in favor of the animat approach being successful earlier than full-consciousness emulation.
I don’t doubt that animals could be used the way you describe, but they sound to me like they’d be just another form of narrow AI.
I tend to follow the notion of the “Society of Mind”: consciousness as a phenomenon that arises from sub-agents within the brain. (Note: this is not a claim that consciousness is an ‘illusion’, but rather that there are steps of intermediate explanation between consciousness and brain anatomy, much as there are steps of intermediate explanation between atoms and bacteria.)
While horses and parrots might be only weakly generally intelligent, they are generally intelligent; they possess a non-zero “g” factor. Exploiting this characteristic by integrating various narrow AIs with such a mind would retain that g while providing socioeconomic utility. And that’s an important point: narrow AI is not merely poor at differentiating which inputs are applicable to it—it is incapable of discerning when its algorithms are useful. A weak possession of g in combination with powerful narrow AIs would provide a very powerful intermediary step between “here” and “there” (where “there” is fully synthetic AGI in which g is mechanically understood).
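As a sketch of the division of labour I have in mind (all names and structure here are hypothetical, purely to make the argument concrete): narrow modules that only know how to run their own algorithm, plus a weakly general controller whose single job is judging which module, if any, applies to the current input.

```python
# Hypothetical sketch: a weakly general controller gating narrow modules.
from typing import Callable, List, Optional

class NarrowModule:
    """Competent within its domain, blind outside it."""
    def __init__(self, name: str, solve: Callable[[str], str]):
        self.name, self.solve = name, solve

class GeneralController:
    """Stand-in for the animal's non-zero g: it cannot solve any task
    itself, but it can judge which specialist (if any) is relevant."""
    def __init__(self, modules: List[NarrowModule]):
        self.modules = modules

    def relevance(self, module: NarrowModule, task: str) -> float:
        # In the proposal this judgment is what the animal brain would
        # supply; here it is faked with a trivial keyword heuristic.
        return 1.0 if module.name in task else 0.0

    def handle(self, task: str) -> Optional[str]:
        best = max(self.modules, key=lambda m: self.relevance(m, task))
        if self.relevance(best, task) == 0.0:
            return None  # knows that none of its specialists apply
        return best.solve(task)

controller = GeneralController([
    NarrowModule("translate", lambda t: "<translation>"),
    NarrowModule("schedule", lambda t: "<calendar entry>"),
])
print(controller.handle("translate this memo"))  # routed to a specialist
print(controller.handle("compose a symphony"))   # declines: nothing fits
```

The point of the sketch is the `handle` gate: the narrow modules never decide whether they apply; only the (weakly) general component does.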
Side note:
And we’re already starting to have e.g. neural prostheses for hippocampal and cerebellar function in rats,
-- I’m confused as to how we could be using the same fact to support opposing conclusions.