I expect that Brain-Computer Interfaces will make their way into consumer devices within the next decade, with disruptive consequences, once people become able to offload some auxiliary cognitive functions onto these devices.
Call it 75% - I would be more than mildly surprised if it hadn’t happened by 2020.
For what I have in mind, what counts as a BCI is the ability to interact with a smartphone-like device inconspicuously, without using your hands.
My reasoning is similar to Michael Vassar’s AR prediction, and based on the iPhone’s success. That success doesn’t seem owed to any particular technological innovation; rather, Apple made things usable that had previously been feasible only in the technical sense. A mobile device for searching the Web, finding your GPS position and compass orientation, and communicating with others was technically feasible years ago. Making these features only slightly less awkward than before revealed hidden demand for unanticipated uses, often combining old features in unexpected ways.
However, in many ways these interfaces are still primitive and awkward. “Sixth Sense” type interfaces are interesting, but still strike me as overly intrusive on others’ personal space.
It would make sense to me to be able, say, to subvocalize a command such as “Show me the way to metro station X”, then have my smartphone gently “tug” me in the right direction as I turn left and right, using a combination of compass and vibration. This is only one scenario that strikes me as already easy to implement, requiring only slightly greater integration of existing functionality.
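A minimal sketch of what the “tug” logic might look like, assuming the phone exposes a compass heading and a target location (the coordinates, threshold, and function names below are illustrative, not any real phone API):

```python
import math

TUG_THRESHOLD_DEG = 15  # dead zone: no vibration when roughly on course

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def tug_direction(heading_deg, target_bearing_deg):
    """Return 'left', 'right', or None (on course) for the haptic cue."""
    delta = (target_bearing_deg - heading_deg + 180) % 360 - 180  # signed, in (-180, 180]
    if abs(delta) <= TUG_THRESHOLD_DEG:
        return None
    return "right" if delta > 0 else "left"

# Example: facing due north (0 deg), target to the north-east
print(tug_direction(0, bearing_to(48.8566, 2.3522, 48.8600, 2.3700)))  # -> 'right'
```

The hard part is the subvocal input, not this arithmetic; the output side needs nothing beyond sensors phones already ship with.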
I expect such things to be disruptive, because the more transparent the integration between our native cognitive abilities and those provided by versatile external devices connected to the global network, the more we will effectively turn into “augmented humans”.
When we merely have to think of a computation to have it performed externally and receive the result (visually or otherwise), we will be effectively smarter than we are now with calculators (though some would say calculators already let us achieve essentially the same results).
I am not predicting with 75% probability that such augmentation will be pervasive by 2020, only that by then some newfangled gadget will have started to reveal hidden consumer demand for this kind of augmentation.
ETA: I don’t mind this comment being downvoted, even as shorthand for “I disagree”, but I’d be genuinely curious to know what flaws you’re seeing in my thinking, or what facts you’re aware of that make my degree of confidence seem way off.
Ruling this prediction as wrong. (Only three years late, but who’s counting.)
By now this looks rather unlikely in the original time-frame, even though there are still encouraging hints from time to time.
I’m not thrilled by your vagueness about what technologies count as a BCI. Little electrodes? The gaming device that came out last year or so got a lot of hype, but the gamers I’ve talked to who have actually used it were all deeply unimpressed. Voice recognition? Already here in niches, but not really popular.
If you can’t think of what interfaces specifically*, then maybe you should phrase your prediction as a negative: ‘by 2020, >50% of the smart cellphone market will use a non-gestural non-keyboard based interface’ etc.
* and you really should be able to—just 9 years means that any possible tech has to have already been demonstrated in the lab and have a feasible route to commercialization; R&D isn’t that fast a process, and neither is being good & cheap enough to take over the global market to the point of ‘pervasive’
Decoding spoken words using local field potentials recorded from the cortical surface
Yep, electrodes, as in the gaming devices. A headset is the form factor I have in mind, so not necessarily electrodes if this is to be believed. I don’t want to commit to burdensome implementation details, but voice isn’t what I mean—it doesn’t count as “unobtrusive” to my way of thinking.
I envision something where I can just form the thought “nearest McDonald’s” (ETA: or somehow bring up a menu and select it from even a restricted set) without it being conspicuous to an outside observer, and get some form of feedback from the device leading me in the right direction. A visual overlay would work, but so would a physical tug.
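To illustrate why a restricted menu matters: the device then only needs a low-bandwidth classifier that discriminates among a few intents and rejects uncertain input, rather than decoding arbitrary thoughts. Everything below is a made-up sketch; classify_intent and the score vector stand in for whatever a real headset pipeline would produce:

```python
# A deliberately small menu: discriminating among a handful of intents is a
# far easier signal-processing problem than free-form thought decoding.
MENU = ["nearest restaurant", "navigate home", "call contact", "cancel"]
CONFIDENCE_FLOOR = 0.5  # reject ambiguous input rather than guess

def classify_intent(scores):
    """Hypothetical stub: a real system would turn EEG/EMG features into
    per-item scores; here the scores are passed in directly."""
    best = max(range(len(MENU)), key=lambda i: scores[i])
    return MENU[best] if scores[best] >= CONFIDENCE_FLOOR else None

# Example: a reading where "nearest restaurant" clearly dominates
print(classify_intent([0.8, 0.1, 0.05, 0.05]))  # -> 'nearest restaurant'
print(classify_intent([0.3, 0.3, 0.2, 0.2]))    # -> None (too ambiguous)
```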
Three and a half years in, this.
Any updates to your original prediction?
Now this.
I think I’ve come round to Gwern’s point of view—this is a bit too vague. The news item I posted makes me feel like we’re still on track for it to happen, though I could be a few years off the mark. I might knock it down to 65% or so to account for uncertainty in timing.
Given the feasibility that currently exists for the gadgets you envision… and Apple’s uncanny ability to bring those ideas to market… I say 2015 is a 75% target for the iThought side-processor device. :)