To begin with, there are significant risks of medical complications—including infections, electrode
displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain.
This is all going to change over time. (I don’t know how quickly, but there is already work on transcranial methods that is showing promise.) Even if we can’t get the bandwidth up quickly enough, we can learn to control infections, and electrodes will get smaller and more adaptive.
Enhancement is likely to be far more difficult than therapy.
Admittedly, therapy will come first. That also means that therapy will drive the development of techniques that will also be helpful for enhancement. The boundary between the two is blurry, and therapies that shade into enhancement will definitely be developed before pure enhancement, and will be easier to sell to end users. For example, for some people, treatment of ADHD-spectrum disorders will definitely be therapeutic, while for others it will be seen as an attractive enhancement.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.[70] Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded.
The visual pathway is impressive, but it’s very limited in the kinds of information it transmits. It’s a poor way of encoding bulk text, for instance. Even questions and answers can be sent far more densely over a much narrower channel. A tool like Google Now that tries to anticipate areas of interest and pre-fetch data before questions rise to consciousness could provide a valuable backchannel, and it wouldn’t need nearly as much bandwidth, so it ought to be doable with non-invasive transcranial techniques.
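To make the density point concrete, here is a back-of-envelope sketch comparing the raw retinal figure quoted above against what a text question-and-answer channel actually needs. The reading speed, character counts, and compression factor are my own illustrative assumptions, not numbers from the quoted passage:

```python
# Back-of-envelope: raw retinal throughput vs. the bandwidth a
# text Q&A backchannel would actually need.

RETINA_BITS_PER_SEC = 10_000_000  # the ~10 Mbit/s figure quoted above

# Illustrative assumptions (mine, not from the text):
WORDS_PER_MIN = 250    # brisk reading speed
CHARS_PER_WORD = 5     # typical English average
BITS_PER_CHAR = 8      # one byte per character, pre-compression
COMPRESSION = 4        # rough compressibility of English text

text_bits_per_sec = WORDS_PER_MIN / 60 * CHARS_PER_WORD * BITS_PER_CHAR / COMPRESSION

print(f"text Q&A channel: ~{text_bits_per_sec:.0f} bit/s")        # ~42 bit/s
print(f"retinal channel:  ~{RETINA_BITS_PER_SEC:,} bit/s")
print(f"ratio:            ~{RETINA_BITS_PER_SEC / text_bits_per_sec:,.0f}x")
```

Even granting an order of magnitude of slack on any of these assumptions, a text backchannel needs roughly five orders of magnitude less bandwidth than the visual stream carries, which is why a narrow non-invasive channel could still be genuinely useful.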
Let’s not forget that this is fundamentally an economic question and not just a technological one. “The vast majority of R&D has been conducted by private industry, which performed 70.5 percent ($282.4 billion) of all R&D in 2009.” (http://bit.ly/1meroFB, a great study of R&D since WW2.) It’s true that any of the channels towards strong AI would have abundant applications to sustain them in the marketplace, but BCI is special because it can ride the wave of virtualization technologies that humans are virtually guaranteed to adopt (see what I did there :). I’m talking about fully immersive virtual reality. The applications of a high-efficacy BCI for military, business, and educational training and for entertainment are truly awe-inspiring and could create a substantial economic engine.
And then there are the research benefits. You’ve already put BCI on the spectrum of interfacing technologies that arguably started with the printing press, but BCI could actually be conceived as the upper limit of this spectrum. As high-bandwidth BCI is approached, a concurrent task is pre-processing information to improve the signal; expert systems are one way of achieving this. The dawn of “Big Data” is spurring more intensive machine learning research, and companies like Ayasdi are developing techniques like topological data analysis not only to extract meaning from high-dimensional data sets, but to render them visually intuitive. This is where the crux of BCI lies.
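To ground the topological data analysis reference, here is a minimal, self-contained sketch of the Mapper construction that underlies this kind of analysis: cover the range of a filter function with overlapping intervals, cluster the points inside each interval, and connect clusters that share points. The interval count, overlap fraction, and clustering threshold are illustrative assumptions, not Ayasdi’s actual pipeline:

```python
import numpy as np
from itertools import combinations

def mapper_graph(points, filter_values, n_intervals=5, overlap=0.3, eps=0.5):
    """Minimal Mapper sketch: overlapping cover of the filter range,
    naive single-linkage clustering per interval, edges where clusters
    share points. Parameters are illustrative, not a real TDA library."""
    lo, hi = filter_values.min(), filter_values.max()
    width = (hi - lo) / n_intervals
    nodes = []  # each node is a set of point indices
    for i in range(n_intervals):
        a = lo + i * width - overlap * width
        b = lo + (i + 1) * width + overlap * width
        idx = np.where((filter_values >= a) & (filter_values <= b))[0]
        clusters = []
        for p in idx:
            # merge p into every existing cluster within distance eps
            merged = [c for c in clusters
                      if any(np.linalg.norm(points[p] - points[q]) < eps
                             for q in c)]
            new = {int(p)}.union(*merged)
            clusters = [c for c in clusters if c not in merged] + [new]
        nodes.extend(clusters)
    # connect nodes whose point sets overlap
    edges = [(i, j) for i, j in combinations(range(len(nodes)), 2)
             if nodes[i] & nodes[j]]
    return nodes, edges

# Example: a noisy circle, filtered by its x-coordinate. The resulting
# graph recovers the loop as a cycle of linked clusters, turning a
# high-dimensional shape question into something visually inspectable.
theta = np.random.uniform(0, 2 * np.pi, 200)
circle = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * np.random.randn(200, 2)
nodes, edges = mapper_graph(circle, circle[:, 0])
print(len(nodes), "nodes,", len(edges), "edges")
```

The point of the construction is exactly the one above: it compresses a high-dimensional cloud into a small graph a human can actually look at, which is the kind of pre-processing a bandwidth-limited BCI would depend on.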
Imagine full virtual realities in which all of the sensory data being fed into your brain is actually real-world data that has been algorithmically pre-processed to represent some real-world problem. For example, novel models could be extracted in real time from a physicist’s brain as she thinks of them (even before awareness). These models would be immediately simulated all around her, projected through time, and compared to previous models. It is even possible that the abstract symbology of mathematics and language could be made obsolete, though I doubt it.
Betting on such a scenario requires no real paradigm shift, only a continuation of current trends. Thus I am in favor of the “BCI as a transitional technology” hypothesis.