I predict with moderate confidence that we will not see:
1. ‘Augmented reality’-style overlays or video beamed directly to the visual cortex.
2. Language output or input (as text, audio, or the like).
3. Pure tech or design demos without any demonstrations or experiments with real biology.
I predict with weak confidence that we won’t see results in humans. (This prediction is stronger the more invasive the result: they could show off a superior EEG in humans, but repair or treatment of strokes will likely only be demonstrated in mice.)
(Those strike me as the next milestones along the ‘make BCIs that are useful for making top performers higher performing’ dimension, which seems to be Musk’s long-term vision for Neuralink.)
They’ve mostly been focusing on medical applications. So I predict we will see something closer to:
1. High-spatial-fidelity brain monitoring (probably invasive?), intended to determine gross functionality of different regions (perhaps useful in conjunction with something like ultrasound to do targeted drug delivery for strokes).
2. Neural prostheses intended to replace the functionality of single brain regions that have been destroyed. (This seems more likely for regions that are somehow symmetric or simple.)
3. Results in rats or mice.
I notice I wanted to put ‘dexterous motor control’ on both lists, so I’m somehow confused; it seems like we already have prostheses that perform pretty well based on external nerve sites (like reading off what you wanted to do with your missing hand from nerves in your arm), but I somehow don’t expect us to have the spatial precision or filtering capacity to do that in the brain. (It also just seems much riskier to attach electrodes internally or to the spinal cord than at an external site, making it unclear why you would even want that.) The main question here for me is something closer to ‘bandwidth’: it seems likely you could pilot a drone using solely EEG if the thing you’re communicating is closer to “a location that you should be at” than “how quickly each of the four rotors should be spinning in what direction.” But we might see results where rats have learned to pilot drones using low-level controls, or something cool like that.
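As a toy sketch of that bandwidth split (a hypothetical illustration, not anything from Neuralink’s work): the brain-side decoder only has to emit a low-dimensional waypoint, and ordinary flight-control software expands it into the high-rate, per-rotor commands. `decode_waypoint`, `flight_controller`, and all constants below are made up.

```python
import numpy as np

# Hypothetical: the decoder maps a handful of gross EEG features to a
# 2-D target position; conventional control software turns that into
# 4 rotor speeds. The brain only ever supplies the 2 numbers.

def decode_waypoint(eeg_features, W):
    """Linear read-out from gross EEG features to a 2-D waypoint."""
    return W @ eeg_features  # shape (2,)

def flight_controller(position, waypoint):
    """Expand the 2-D goal into 4 rotor speeds via a proportional law."""
    error = waypoint - position
    base, gain = 1000.0, 50.0  # made-up hover speed and gain
    return base + gain * np.array([error[0], error[1], -error[0], -error[1]])

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 8))          # decoder over 8 gross EEG features
waypoint = decode_waypoint(rng.normal(size=8), W)
rotors = flight_controller(np.zeros(2), waypoint)
print(waypoint.shape, rotors.shape)  # (2,) vs (4,)
```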
Scoring your predictions: it looks like you got all three “not see” predictions right, as well as #1 and #3 from “will see”, with only #2 from “will see” missing (though you had merely predicted we’d see something “closer to” your “will see” list, so missing one doesn’t necessarily mean you were wrong).
The first half of #1 I got right, but I think the second half was more wrong than right. While this might be fast enough to be useful in a crisis, it looks like the design is focused more on getting useful information out of regions than on the ‘gross functionality’ target I mentioned there.
I think the big result here was that they came up with a way to do deep insertion of wires designed for biocompatibility and longevity, which is impressive, and along a dimension I wasn’t tracking much in my prediction. In retrospect, I may have updated too much on the article I read beforehand, which gave me the sense that this was closer to ‘a medical startup that got Musk’s money’ than to ‘the thing Musk said he was trying to do, which will try to be useful along the way’; the white paper looks more like the latter.
> I notice I wanted to put ‘dexterous motor control’ on both lists, so I’m somehow confused; it seems like we already have prostheses that perform pretty well based on external nerve sites (like reading off what you wanted to do with your missing hand from nerves in your arm) but I somehow don’t expect us to have the spatial precision or filtering capacity to do that in the brain.
We’ve had prostheses that let people control computer cursors via a connection directly to the brain at least since 2001. Would you not count that as dexterous motor control?
Length of the control vector seems important; there are lots of ways to use gross signals to control small vectors that don’t scale to controlling large vectors. Basically, you could imagine that question as something like “could you dance with it?” (doable in 2014) or “could you play a piano with it?” (doable in 2018), both of which naively seem more complicated than an (x, y) pair (at least, when you don’t have visual feedback).
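To make that scaling intuition concrete, here is a purely synthetic toy (every number below is invented): a least-squares decoder built on four gross signal channels reconstructs a short intended control vector almost perfectly, but its error climbs toward 100% once the control vector outgrows what the channels can carry.

```python
import numpy as np

# Synthetic toy: the user's intent is an n-dimensional control vector,
# but we only observe 4 gross signal channels (a random linear mixture
# plus noise). A linear decoder recovers at most ~4 of those dimensions.

rng = np.random.default_rng(0)
n_trials, k_channels = 2000, 4

def residual_fraction(n_control_dims):
    intent = rng.normal(size=(n_trials, n_control_dims))   # what the user wants
    mix = rng.normal(size=(n_control_dims, k_channels))    # intent -> signals
    signals = intent @ mix + 0.05 * rng.normal(size=(n_trials, k_channels))
    decoder, *_ = np.linalg.lstsq(signals, intent, rcond=None)
    recon = signals @ decoder
    return np.mean((intent - recon) ** 2) / np.mean(intent ** 2)

for dims in (2, 4, 16, 88):  # cursor-, drone-, ..., piano-sized vectors
    print(dims, round(residual_fraction(dims), 2))
# Error stays near 0 while dims <= channels, then rises toward 1.
```

Under these assumptions the unrecovered fraction roughly tracks 1 − channels/dims, which is one way of cashing out “gross signals control small vectors but don’t scale to large ones.”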