My guess is that most of the interesting stuff here is bottlenecked on the biotech that determines bandwidth. Most of the interesting stuff needs very many (>millions?) precise connections, and that’s hard to get safely with big clumsy electrodes. https://tsvibt.blogspot.com/2022/11/prosthetic-connectivity.html
It would be very nice if someone could show that’s wrong, or if someone could figure out how to get many connections faster than the default research.
I agree with this.

Electrodes struggle to scale quickly

Electrode-based neurotechnologies that could conceivably be in humans over the next 5 years have channel counts in the hundreds (e.g. Utah arrays) to thousands (e.g. Neuralink, Paradromics), or, in a generous best case, tens of thousands. In an optimistic scenario you could spike-sort several neurons per contact, but an assumption of one neuron per electrode, on average, is probably about right.
Stevenson and Kording plotted the number of neurons we can record from simultaneously as a function of time (over ~60 years) and estimated the doubling time to be ~7.4 years. Mouse brains have ~1e8 neurons and human brains have ~1e11 neurons. At that rate, assuming we’re starting from ~10k simultaneously recorded neurons today (optimistic), we wouldn’t expect to record from every neuron in a mouse brain until roughly the 2120s, or from every neuron in a human brain until roughly the 2190s.
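As a sanity check, here is a minimal sketch of that extrapolation. The starting year, the ~10k starting neuron count, and the constant doubling time are all assumptions carried over from the paragraph above, not measurements:

```python
import math

DOUBLING_TIME_YEARS = 7.4   # Stevenson & Kording's estimated doubling time
START_YEAR = 2024           # assumed "today"
START_NEURONS = 1e4         # assumed optimistic present-day simultaneous-recording count

def year_reached(target_neurons):
    """Year at which the extrapolated doubling curve hits target_neurons."""
    doublings = math.log2(target_neurons / START_NEURONS)
    return START_YEAR + doublings * DOUBLING_TIME_YEARS

print(f"Whole mouse brain (~1e8 neurons):  ~{year_reached(1e8):.0f}")   # ~2122
print(f"Whole human brain (~1e11 neurons): ~{year_reached(1e11):.0f}")  # ~2196
```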
Electrodes cannot fully scale
Estimating timelines for scaling electrodes to the whole brain is not the whole picture. Electrode technology capable of single-neuron recordings is highly invasive. We mention this in the main post, but Markus Meister’s butcher number captures it: the ratio of neurons destroyed to neurons recorded. Today’s technologies have very high butcher numbers. For example, the Utah array has a butcher number of ~200, and a Neuropixels probe (limited use in humans) has a butcher number of ~2.5. To scale to the whole brain (e.g. for WBE), you’d need a butcher number of essentially zero. I don’t think electrodes are capable of this while maintaining the ability to record single neurons.
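To make the scaling problem concrete, here is a rough sketch of what today’s butcher numbers would imply if naively held constant out to whole-brain recording (the order-of-magnitude neuron count and the constant-butcher-number assumption are mine, purely for illustration):

```python
# Butcher number (Meister): neurons destroyed per neuron recorded.
butcher_numbers = {"Utah array": 200, "Neuropixels": 2.5}

HUMAN_NEURONS = 1e11  # rough order-of-magnitude count

for name, b in butcher_numbers.items():
    destroyed = b * HUMAN_NEURONS  # neurons destroyed to record from every neuron
    print(f"{name}: ~{destroyed:.0e} neurons destroyed, i.e. {b:g}x the brain's "
          f"total neuron count -- hence the need for a butcher number of ~0")
```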
How to get many connections faster
I won’t expand on this too much here, other than to highlight the section pasted below from the main post. The next generation of neurotechnology is being actively developed and brings many new biophysical effects into the neurotech mix, with plenty of room for capabilities to grow.
Optical techniques offer high spatial and temporal resolution. Unfortunately, photons scatter in tissue, limiting recording depth. Ultrasound penetrates soft tissue efficiently and is highly sensitive to neuronal function. It’s diffraction-limited to ~100 micron resolution, but super-resolution and genetic engineering techniques are improving spatial resolution and enabling more specific functional measurements. Other approaches based on different biophysical sources of contrast (e.g., magnetics), delivery of these approaches to the brain through novel means (e.g., intravascular), or the combination of multiple techniques, may also contribute to progress in neurotech for AI alignment.
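For intuition on where the ~100 micron figure comes from: the diffraction limit scales with the acoustic wavelength, which is the speed of sound in tissue divided by the carrier frequency. A quick sketch, assuming a ~15 MHz imaging frequency (my assumption; actual functional ultrasound systems vary):

```python
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, typical value for soft tissue
FREQ_HZ = 15e6                  # assumed imaging frequency (~15 MHz)

wavelength_m = SPEED_OF_SOUND_TISSUE / FREQ_HZ
print(f"Acoustic wavelength ~ {wavelength_m * 1e6:.0f} microns")  # ~103 microns

# Pushing the frequency higher improves resolution, but attenuation in tissue
# grows with frequency and cuts penetration depth -- one reason super-resolution
# methods that beat the diffraction limit are attractive.
```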
Thanks! You’ve confirmed my fears about the butcher number.
Re/ other methods: I wonder if there are alternate write methods that can plausibly scale to >100,000s of neurons. The enhancements that seem most promising to me involve both reading and writing at massive scale.