Here are some relevant blockquotes of Bostrom’s reasoning on brain-computer interfaces, from Superintelligence chapter 2:
It is sometimes proposed that direct brain–computer interfaces, particularly implants, could enable humans to exploit the fortes of digital computing—perfect recall, speedy and accurate arithmetic calculation, and high-bandwidth data transmission—enabling the resulting hybrid system to radically outperform the unaugmented brain.64 But although the possibility of direct connections between human brains and computers has been demonstrated, it seems unlikely that such interfaces will be widely used as enhancements any time soon.65
To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. … One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls. Treated subjects also reported more cognitive complaints.66 Such risks and side effects might be tolerable if the procedure is used to alleviate severe disability. But in order for healthy subjects to volunteer themselves for neurosurgery, there would have to be some very substantial enhancement of normal functionality to be gained.
Furthermore:
enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators.67 Patients who are deaf or blind might benefit from artificial cochleae and retinas.68 Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain.69 What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence. Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone. So this limiting case just takes us back to the AI path, which we have already examined.
To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. … One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls.
Thanks for the quotes. I was aware that BCI would be dangerous, but I wasn’t aware that current BCI, with its very limited bandwidth, was already so dangerous. As I said, one could try only interfacing with the surface of the brain—the exact opposite of deep brain stimulation—which is less invasive but does massively reduce options.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.
Outgoing bandwidth, OTOH, is only a few bits per second. Better to pick the low-hanging fruit.
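To make the asymmetry concrete, here is a back-of-envelope comparison. The retina figure comes from the quoted passage; the typing and speaking rates, and the ~1 bit per character entropy of English (Shannon’s classic estimate), are my own rough assumptions, so treat the outputs as order-of-magnitude only:

```python
# Back-of-envelope comparison of human I/O channel bandwidth.
# Retina figure is from the quoted text; the rest are rough assumptions.

retina_bps = 10_000_000        # inbound: retina -> brain (from the text)

typing_wpm = 80                # assumed fast typist
chars_per_word = 5
bits_per_char = 1.0            # Shannon's ~1 bit/char estimate for English
typing_bps = typing_wpm * chars_per_word * bits_per_char / 60

speech_wpm = 150               # assumed conversational speaking rate
speech_bps = speech_wpm * chars_per_word * bits_per_char / 60

print(f"typing : ~{typing_bps:.1f} bits/s")   # ~6.7
print(f"speech : ~{speech_bps:.1f} bits/s")   # ~12.5
print(f"retina : ~{retina_bps:,} bits/s")
print(f"in/out ratio vs typing: ~{retina_bps / typing_bps:,.0f}x")
```

Even with generous assumptions, the outbound channel is roughly six orders of magnitude narrower than the inbound one, which is why reading thoughts out looks like the low-hanging fruit.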
Coincidentally, I ran into a paper, Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, which seems to claim to extract large amounts of information from the visual cortex via fMRI:
The decoder provides remarkable reconstructions of the viewed movies.
Although I don’t know the specifics because of the paywall.
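For intuition about what such a decoder is doing, here is a toy sketch of the general encoding/decoding idea only — not the paper’s actual method (which I can’t see either). Everything is synthetic: simulated “voxel” responses are a noisy linear function of stimulus features, and a ridge-regression decoder maps responses back to features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 'stimulus' feature vectors (e.g. movie-frame features) drive
# simulated voxel responses through an unknown linear encoding model.
n_train, n_test, n_feat, n_vox = 500, 50, 20, 100
W = rng.normal(size=(n_vox, n_feat))            # hidden encoding weights

def responses(stim):
    return stim @ W.T + 0.5 * rng.normal(size=(len(stim), n_vox))

stim_train = rng.normal(size=(n_train, n_feat))
stim_test  = rng.normal(size=(n_test, n_feat))
resp_train = responses(stim_train)
resp_test  = responses(stim_test)

# Ridge-regression decoder: map voxel responses back to stimulus features.
lam = 10.0
A = resp_train.T @ resp_train + lam * np.eye(n_vox)
D = np.linalg.solve(A, resp_train.T @ stim_train)   # (n_vox, n_feat)

stim_hat = resp_test @ D
r = np.corrcoef(stim_hat.ravel(), stim_test.ravel())[0, 1]
print(f"decoded-vs-true feature correlation: r = {r:.2f}")
```

The point is just that, given enough paired (stimulus, response) data, even a simple linear decoder can pull a lot of structure back out of noisy brain-like signals.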
Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence.
As I said, I think taking information out of the brain would happen long before this scenario. But in the more futuristic case of an exocortex, there would still be a period where some parts of the brain can be emulated, but others can’t, and so a hybrid system would still be superior.
I noticed that you don’t have a green arrow pointing from BCI to WBE or AI in your diagram. It seems like if BCIs make people smarter, that should allow them to do WBE/AI research more effectively. Thoughts?
To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain.
This is all going to change over time. (I don’t know how quickly, but there is already work on trans-cranial methods that is showing promise.) Even if we can’t get the bandwidth up quickly enough, we can learn to control infections, and electrodes will get smaller and more adaptive.
enhancement is likely to be far more difficult than therapy.
Admittedly, therapy will come first. That also means therapy will drive the development of techniques that will also be useful for enhancement. The boundary between the two is blurry, and therapies that shade into enhancement will definitely be developed before pure enhancement, and will be easier to sell to end users. For example, for some people, treatment of ADHD-spectrum disorders will definitely be therapeutic, while for others it will be seen as an attractive enhancement.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded.
The visual pathway is impressive, but it’s very limited in the kinds of information it transmits. It’s a poor way of encoding bulk text, for instance. Even questions and answers can be sent far more densely over a much narrower channel. A tool like Google Now that tries to anticipate areas of interest and pre-fetch data before questions arise to consciousness could provide a valuable backchannel, and it wouldn’t need nearly the bandwidth, so it ought to be doable with non-invasive trans-cranial techniques.
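The pre-fetch backchannel idea can be sketched in a few lines. This is a purely hypothetical illustration (the topics and the fetch function are made up): watch a stream of low-bandwidth context signals, learn which query tends to follow which, and fetch the likely next answer before it is consciously asked for:

```python
from collections import Counter

# Minimal sketch of a predictive pre-fetcher: learn topic->topic
# transitions from an observed stream and cache the likely follow-up.

class Prefetcher:
    def __init__(self):
        self.transitions = {}          # topic -> Counter of next topics
        self.cache = {}
        self.last = None

    def observe(self, topic):
        if self.last is not None:
            self.transitions.setdefault(self.last, Counter())[topic] += 1
        self.last = topic
        # Pre-fetch the historically most likely follow-up topic.
        nxt = self.predict(topic)
        if nxt and nxt not in self.cache:
            self.cache[nxt] = self.fetch(nxt)

    def predict(self, topic):
        counts = self.transitions.get(topic)
        return counts.most_common(1)[0][0] if counts else None

    def fetch(self, topic):
        return f"<answer about {topic}>"   # stand-in for a real lookup

p = Prefetcher()
for t in ["flight", "weather", "flight", "weather", "flight"]:
    p.observe(t)

# After seeing flight->weather, observing "flight" pre-fetches "weather".
print("weather" in p.cache)   # True
```

The decoding burden on the user side is tiny — a few bits of context per event — which is what makes a low-bandwidth, non-invasive channel plausible for this.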
Let’s not forget that this is fundamentally an economic question and not just a technological one. “The vast majority of R&D has been conducted by private industry, which performed 70.5 percent ($282.4 billion) of all R&D in 2009.” -http://bit.ly/1meroFB (a great study of R&D since WW2). It’s true that any of the channels toward strong AI would have abundant applications to sustain them in the marketplace, but BCI is special because it can ride the wave of virtualization technologies that humans are virtually guaranteed to adopt (see what I did there :). I’m talking about fully immersive virtual reality. The applications of a high-efficacy BCI for military, business, educational training, and entertainment are truly awe-inspiring and could create a substantial economic engine.
And then there are the research benefits. You’ve already put BCI on the spectrum of interfacing technologies that arguably started with the printing press, but BCI could actually be conceived of as the upper limit of this spectrum. As high-bandwidth BCI is approached, a concurrent task is pre-processing information to improve signal; expert systems are one way of achieving this. The dawn of “Big Data” is spurring more intensive machine-learning research, and companies like Ayasdi are figuring out techniques like topological data analysis to not only extract meaning from high-dimensional data sets but to render them visually intuitive—this is where the crux of BCI lies.
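Topological data analysis itself is an involved pipeline, but the core step being gestured at — projecting high-dimensional data down to something visually intuitive — can be illustrated with plain PCA as a much simpler stand-in (synthetic data, nothing from Ayasdi’s actual methods):

```python
import numpy as np

# PCA via SVD as a stand-in for the "make high-dimensional data visually
# intuitive" step: project 50-D points onto their top two components.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
X[:100] += 3.0                       # two clusters hidden in 50-D

Xc = X - X.mean(axis=0)              # center before extracting components
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords2d = Xc @ Vt[:2].T             # (200, 2): now plottable in the plane

print(coords2d.shape)                # (200, 2)
```

The two clusters, invisible in any raw coordinate, separate cleanly along the first component — the kind of structure a human can grasp at a glance once it is rendered in two dimensions.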
Imagine full virtual realities in which all of the sensory data being fed into your brain is actually real-world data that has been algorithmically pre-processed to represent some real-world problem. For example, novel models could be extracted in real time from a physicist’s brain as she thinks of them (even before awareness). These models would be immediately simulated all around her, projected through time, and compared to previous models. It is even possible that the abstract symbology of mathematics and language could be made obsolete, though I doubt it.
Betting on such a scenario requires no real paradigm shift, only a continuation of current trends. Thus I am in favor of the “BCI as a transitional technology” hypothesis.