According to Bostrom, brain-computer interfaces are unlikely to yield superintelligence.
This strikes me as slightly surprising. From a technological standpoint, I would have thought that a hybrid of biological and machine intelligence would be likely to have the best aspects of each, by which I mean the best aspects available at the time the BCI is created; I am not positing that biological intelligence has some fundamental advantage that sufficiently advanced computers could never overcome.
A fairly close analogy is how teams of a competent chess player and a laptop chess program can beat both the best humans and computers with far more processing power.
Admittedly I don’t know much about BCI technology, but I have heard promising things about optogenetics. Having to undergo brain surgery is a problem, but the extent of this problem seems to depend on how far the interface needs to penetrate into the brain, rather than merely overlying its surface. If bootstrapping to greater levels of intelligence required repeated surgery to install better BCIs then this might be problematic, but intelligence gains could also be realised by working on the software, or adding more hardware, or adding more people to a swarm intelligence.
Of course, in the end it would transition to a fully or mostly machine intelligence, either through offloading ever more cognition to the machine components until the organic brains were only a tiny fraction of the mind, or through using the increased intelligence to develop FAI/WBE. But that doesn’t make BCI a dead end, so much as a transitional stage.
Finally, in the last few years, Moore’s law has started to show signs of slowing, and this should cause one to update in favour of BCI coming first, as it is probably the path least dependent upon raw computing power (unless de novo AI turns out to be far more computationally efficient than the brain).
As far as social constraints go, I don’t think it would be all that hard to find volunteers, and in fact there is a natural progression from treatment of blindness, mental illnesses and so forth through to transhumanism. Legal challenges are perhaps a more likely problem, but as previously mentioned, medical use will likely provide the precedent to grandfather it in.
Note I’m not saying that this is necessarily a desirable path—FAI is preferable—only that it seems at least somewhat plausible to come first. Having said that, in the event that progress on FAI is slower and other existential threats loom, BCI could perhaps be a sensible backup plan.
Here are some relevant blockquotes of Bostrom’s reasoning on brain-computer interfaces, from Superintelligence chapter 2:
It is sometimes proposed that direct brain–computer interfaces, particularly implants, could enable humans to exploit the fortes of digital computing—perfect recall, speedy and accurate arithmetic calculation, and high-bandwidth data transmission—enabling the resulting hybrid system to radically outperform the unaugmented brain.64 But although the possibility of direct connections between human brains and computers has been demonstrated, it seems unlikely that such interfaces will be widely used as enhancements any time soon.65
To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. … One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls. Treated subjects also reported more cognitive complaints.66 Such risks and side effects might be tolerable if the procedure is used to alleviate severe disability. But in order for healthy subjects to volunteer themselves for neurosurgery, there would have to be some very substantial enhancement of normal functionality to be gained.
Furthermore:
enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators.67 Patients who are deaf or blind might benefit from artificial cochleae and retinas.68 Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain.69 What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence. Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone. So this limiting case just takes us back to the AI path, which we have already examined.
To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain. … One study of Parkinson patients who had received deep brain implants showed reductions in verbal fluency, selective attention, color naming, and verbal memory compared with controls.
I was aware that BCI would be dangerous, but I wasn’t aware that current BCI, with very limited bandwidth, was already so dangerous. As I said, one could try only interfacing with the surface of the brain—the exact opposite of deep brain stimulation—which is less invasive but does massively reduce options.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.
Outgoing bandwidth, OTOH, is only a few bits per second. Better to pick the low-hanging fruit.
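To make the asymmetry concrete, here is a rough back-of-the-envelope comparison. The input figure comes from the Bostrom quote above; the typing speed and per-character entropy are just assumptions I’ve picked for illustration:

```python
# Rough, assumption-laden comparison of brain I/O bandwidths.
# Input figure is from the Bostrom quote; output figures are assumed.

retina_bits_per_s = 10_000_000        # ~10 Mbit/s into the brain (Bostrom)

# Deliberate output via typing: assume ~60 words per minute,
# ~5 characters per word, and ~1.3 bits of entropy per character
# (roughly Shannon's estimate for English text).
words_per_min = 60
chars_per_word = 5
bits_per_char = 1.3
typing_bits_per_s = words_per_min * chars_per_word * bits_per_char / 60

print(f"Input  (retina): {retina_bits_per_s:>12,.0f} bits/s")
print(f"Output (typing): {typing_bits_per_s:>12,.1f} bits/s")
print(f"Ratio: roughly {retina_bits_per_s / typing_bits_per_s:,.0f}:1")
```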
Coincidentally, I ran into a paper, Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, which seems to claim to extract large amounts of information from the visual cortex via fMRI:
The decoder provides remarkable reconstructions of the viewed movies.
Although I don’t know the specifics because of the paywall.
Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis”—which is just another way of saying artificial general intelligence.
As I said, I think taking information out of the brain would happen long before this scenario. But in the more futuristic case of an exocortex, there would still be a period where some parts of the brain can be emulated, but others can’t, and so a hybrid system would still be superior.
I noticed that you don’t have a green arrow pointing from BCI to WBE or AI in your diagram. It seems like if BCIs make people smarter, that should allow them to do WBE/AI research more effectively. Thoughts?
To begin with, there are significant risks of medical complications—including infections, electrode displacement, hemorrhage, and cognitive decline—when implanting electrodes in the brain.
This is all going to change over time. (I don’t know how quickly, but there is already work on trans-cranial methods that is showing promise.) Even if we can’t get the bandwidth up quickly enough, we can control infections, and electrodes will get smaller and more adaptive.
enhancement is likely to be far more difficult than therapy.
Admittedly, therapy will come first. But that also means therapy will drive the development of techniques that will be helpful for enhancement too. The boundary between the two is blurry, and therapies that shade into enhancement will definitely be developed before pure enhancement, and will be easier to sell to end users. For example, for some people treatment of ADHD-spectrum disorders will definitely be therapeutic, while for others it will be seen as an attractive enhancement.
Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing.70 Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded.
The visual pathway is impressive, but it’s very limited in the kinds of information it transmits. It’s a poor way of encoding bulk text, for instance. Even questions and answers can be sent far more densely over a much narrower channel. A tool like Google Now that tries to anticipate areas of interest and pre-fetch data before questions arise to consciousness could provide a valuable backchannel, and it wouldn’t need anywhere near the bandwidth, so it ought to be doable with non-invasive trans-cranial techniques.
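To make that concrete, here is a toy sketch of what such an anticipatory backchannel might look like in software. Everything in it is hypothetical: the Prefetcher class, its predict and fetch callbacks, and the dummy query are illustrative stand-ins of my own, not any real Google Now API.

```python
# Toy sketch of an anticipatory "backchannel" (all names hypothetical):
# a low-bandwidth context signal drives prefetching, so that when a
# question finally surfaces the answer is already cached.

from typing import Callable, Dict, List

class Prefetcher:
    def __init__(self, predict: Callable[[str], List[str]],
                 fetch: Callable[[str], str]):
        self.predict = predict          # context -> likely queries
        self.fetch = fetch              # query -> answer (slow path)
        self.cache: Dict[str, str] = {}

    def observe(self, context: str) -> None:
        """Called continuously with cheap, low-bandwidth context signals."""
        for query in self.predict(context):
            if query not in self.cache:
                self.cache[query] = self.fetch(query)   # done ahead of time

    def ask(self, query: str) -> str:
        """Called when a question actually reaches consciousness."""
        return self.cache.get(query) or self.fetch(query)

# Dummy stand-ins, for illustration only.
prefetcher = Prefetcher(
    predict=lambda ctx: [f"define {ctx}"],
    fetch=lambda q: f"<answer to '{q}'>",
)
prefetcher.observe("optogenetics")
print(prefetcher.ask("define optogenetics"))   # answered from cache
```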
Let’s not forget that this is fundamentally an economic question and not just a technological one. “The vast majority of R&D has been conducted by private industry, which performed 70.5 percent ($282.4 billion) of all R&D in 2009.” -http://bit.ly/1meroFB (a great study of R&D since WW2). It’s true that any of the channels towards strong AI would have abundant applications to sustain them in the marketplace, but BCI is special because it can ride the wave of virtualization technologies that humans are virtually guaranteed to adopt (see what I did there :). I’m talking about fully immersive virtual reality. The applications of a high-efficacy BCI for military, business, educational training and entertainment are truly awe-inspiring and could create a substantial economic engine.
And then there are the research benefits. You’ve already put BCI on the spectrum of interfacing technologies which arguably started with the printing press, but BCI could actually be conceived as the upper limit of this spectrum. As high-bandwidth BCI is approached, a concurrent task is pre-processing information to improve the signal; expert systems are one way of achieving this. The dawn of “Big Data” is spurring more intensive machine learning research, and companies like Ayasdi are figuring out techniques like topological data analysis to not only extract meaning from high-dimensional data sets but to render them visually intuitive—this is where the crux of BMI lies.
Imagine full virtual realities in which all of the sensory data being fed into your brain is actually real-world data which has been algorithmically pre-processed to represent some real-world problem. For example, novel models could be extracted in real time from a physicist’s brain as she thinks of them (even before awareness). These models would be immediately simulated all around her, projected through time, and compared to previous models. It is even possible that the abstract symbology of mathematics and language could be made obsolete, though I doubt it.
Betting on such a scenario requires no real paradigm shift, only a continuation of current trends. Thus I am in favor of the “BCI as a transitional technology” hypothesis.
We have already entered the transitional phase of BCI via the keyboard and mouse, and now touchscreen.
I’m not just being a smartass. The momentum is on BCI’s side; it’s not hard to imagine an externally wearable device that you could query with a thought, and which would then return an answer of higher quality than the best search or question answering today. Tightening information and feedback loops provides a large cognitive boost; surgical methods would just be a bonus.
True—in fact, this has been going on since the invention of the printing press. But I think we’ve exhausted all the low-hanging fruit here, in that we already have access to all the public-domain knowledge of humanity at our fingertips, including crude automatic translations of other languages and tools like Siri or Wolfram Alpha.
But it’s not easily usable, and really it’s only for general-domain knowledge or certain types of broadly available statistics. Consider the difference between having to search on a given topic and having a subject-matter expert on that topic on the phone (especially for fairly academic or locally specific topics that have poorer search results). That’s a gap yet to be bridged by conventional search technology alone.
Ahh, you’re talking about expert systems. I agree that this does hold a lot of potential—in fact, on a related tangent, I’ve been spending a lot of time coding some machine learning algorithms, and I can safely say that in their target domain not only are these algos a lot better at inference than I am, but, given certain shortcomings that I have not (yet) been able to tackle, the combination of myself and the algos is significantly better than either of us in isolation.
So in a way I’m already a cyborg, and in this specific case I don’t think a simple BCI would improve matters much.
A full coding cortex OTOH...
“Expert systems” suggests a particular set of ideas and functions, and brings to mind software made in the 1980s that often failed to live up to expectations. I do mean something similar to that, admittedly, but bringing in the best design and information-retrieval ideas developed in the 30 years since then.
And yes, when predictions are being made, combining different predictors almost always yields superior results. Another natural “cyborg” area.
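As a minimal, purely synthetic illustration of that point, assuming two unbiased predictors with independent errors (the noise levels are made up for the demo):

```python
# Minimal illustration of why averaging independent predictors helps:
# two unbiased estimators with independent noise, combined by a simple mean.
# Purely synthetic data; the numbers are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=10_000)                             # quantity being predicted

human   = truth + rng.normal(scale=1.0, size=truth.size)    # noisy predictor A
machine = truth + rng.normal(scale=1.0, size=truth.size)    # noisy predictor B
combo   = (human + machine) / 2                              # naive "cyborg" ensemble

def rmse(pred):
    return np.sqrt(np.mean((pred - truth) ** 2))

print(f"human    RMSE: {rmse(human):.3f}")    # ~1.00
print(f"machine  RMSE: {rmse(machine):.3f}")  # ~1.00
print(f"combined RMSE: {rmse(combo):.3f}")    # ~0.71, i.e. 1/sqrt(2) of either
```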
From a technological standpoint, I would have thought that a hybrid of biological and machine intelligence would be likely to have the best aspects of each
Unfortunately the ‘biological’ part is not too great at recursive self-improvement (at the fundamental level). It’s a mess. If we merely wanted to create cyborgs with cognitive advantages then this strategy is a no-brainer (except literally). If we are trying to create a superintelligence then the recursive self-improvement feature is more or less obligatory.
Indeed, the mechanical component is far better at recursive self-improvement, which is why I wrote:
Of course, in the end it would transition to a fully or mostly machine intelligence, either through offloading ever more cognition to the machine components until the organic brains were only a tiny fraction of the mind, or through using the increased intelligence to develop FAI/WBE.
It also occurs to me that, for a ‘hive mind’, using the enhanced intelligence to acquire the resources to connect more people to the hive mind counts as quantitative, if not qualitative, biological self-improvement.
A fairly close analogy is how teams of a competent chess player and a laptop chess program can beat both the best humans and computers with far more processing power.
The question is whether that team would be more efficient if you gave them a high-functioning BCI. I’m not sure that’s true.
I’d guess it’d make very little difference in ‘regular’ chess but it would help somewhat in bullet chess.
I read somewhere that Kasparov was considering three moves per second while Deep Blue considers billions.
If you consider a move and it takes a few seconds to enter it into a computer, as opposed to having it read from your brain, analysed, and a preliminary evaluation (enough to check that there are no obvious flaws) returned to your brain within milliseconds, then this seems like a several-fold speed-up. True, it’s a quantitative rather than a qualitative speedup, but then this only requires a BCI capable of transmitting thoughts consisting of a few bytes.
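As a quick sketch of the arithmetic behind that ‘several-fold’ claim; all the timings below are assumptions for illustration, not measurements:

```python
# Back-of-the-envelope for the claimed speed-up; all timings are assumptions.
# think_s: time spent deciding on a candidate move
# io_s:    time spent entering the move and reading back the evaluation

def moves_checked_per_minute(think_s, io_s):
    return 60 / (think_s + io_s)

manual = moves_checked_per_minute(think_s=2.0, io_s=4.0)   # keyboard + screen
bci    = moves_checked_per_minute(think_s=2.0, io_s=0.05)  # few-byte BCI round trip

print(f"manual interface: {manual:.1f} candidate moves/min")
print(f"few-byte BCI:     {bci:.1f} candidate moves/min")
print(f"speed-up: ~{bci / manual:.1f}x")   # ~2.9x with these assumptions
```

With these made-up numbers the gain is roughly threefold, and it grows further the more the interface time dominates the thinking time.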
I read somewhere that Kasparov was considering three moves per second while Deep Blue considers billions.
What exactly do you mean by the word “consider” in that sentence? I think your trivial idea of what it might mean isn’t accurate enough to be useful in a case like this.
If you consider a move and it takes a few seconds to enter it into a computer, as opposed to being read from your brain
The idea that reading something from the brain takes no time or effort from the brain to parse the thought in a readable way is inaccurate.
I think your trivial idea of what it might mean isn’t accurate enough to be useful in a case like this.
As a tangential point, this seems needlessly aggressive and presumptive. I don’t think you can know whether or not I understand cognitive neuroscience based on one paragraph I have written.
There’s nothing meant to be aggressive in that sentence. I might have removed the word “you”, but ‘trivial’ is a pretty accurate word.
The word ‘consider’ might be straightforward when you speak about someone’s subjective experience of what he’s doing, but it’s not straightforward if you speak about the behavior of billions of neurons.
When solving a Go life-and-death problem I might have a conscious experience of ‘considering’ only 2 moves per second, but that doesn’t mean my brain isn’t effectively analysing many more positions and only bringing the interesting ones up to the level of conscious awareness. If you look at the number of positions a computer has to go through to solve a Go life-and-death problem, that’s the best explanation of how humans can effectively solve problems that would otherwise take millions or billions of moves.
I don’t think you can know whether or not I understand cognitive neuroscience based on one paragraph I have written.
Your post indicates that you think you can simply transfer a notion of “consider” that we use in daily life to describing the behavior of large numbers of neurons, when we are not conscious of what most of the neurons in our head are doing.
‘Trivial’ is simply the accurate word for describing a daily-life notion of a word.
What exactly do you mean by the word “consider” in that sentence? I think your trivial idea of what it might mean isn’t accurate enough to be useful in a case like this.
Well, I didn’t personally interview Kasparov, so I don’t know any more than you do about what the article I read meant by ‘consider’, although it seems like a fairly unambiguous word to me. Simply tabooing every word in a discussion isn’t necessarily helpful; it can simply slow the conversation down.
The idea that reading something from the brain takes no time or effort from the brain to parse the thought in a readable way is inaccurate.
Why should this require any extra effort on the part of the brain? Surely if you can read the relevant area of the brain, and have the computing power and understanding of neuroscience required, then any thought is readable. Are you a neuroscientist?
Surely if you can read the relevant area of the brain, and have the computing power and understanding of neuroscience required, then any thought is readable.
There’s no way you can read billions of neurons accurately at the same time. That’s just not feasible.
What you can do is read a bunch of neurons, or read a signal that aggregates the activity of a bunch of neurons.
Why should this require any extra effort on the part of the brain?
Nearly all BCI designs that are used in the real world take effort. A bunch of them even have a learning curve.
Are you a neuroscientist?
Depends on how widely you define the term. I have sat in on a university course on neuroscience that discussed BCIs.