We Haven’t Uploaded Worms
In theory you can upload someone’s mind onto a computer, allowing them to live forever as a digital form of consciousness, just like in the Johnny Depp film Transcendence.
But it’s not just science fiction. Sure, scientists aren’t anywhere near close to achieving such a feat with humans (and even if they could, the ethics would be pretty fraught), but now an international team of researchers have managed to do just that with the roundworm Caenorhabditis elegans.
—Science Alert
Uploading an animal, even one as simple as C. elegans, would be very impressive. Unfortunately, we’re not there yet. What the people working on OpenWorm have done instead is build a working robot based on C. elegans and show that it can do some things the worm can do.
The C. elegans nematode has only 302 neurons, and each nematode has the same fixed wiring pattern. We’ve known this pattern, or connectome, since 1986. [1] In a simple model, each neuron has a threshold and will fire if the weighted sum of its inputs is greater than that threshold. This means knowing the connections isn’t enough: we also need to know the weights and thresholds. Unfortunately, we haven’t figured out a way to read these values off of real worms. Suzuki et al. (2005) [2] ran a genetic algorithm to learn values for these parameters that would give a somewhat realistic worm, and showed various wormlike behaviors in software. The recent stories about the OpenWorm project are about them doing something similar in hardware. [3]
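To make that model concrete, here’s a minimal sketch in Python. The connectome supplies the wiring; the weights and threshold below are made-up placeholders, since those are exactly the values we can’t yet read off a real worm:

```python
# Minimal sketch of the simple threshold-neuron model described above.
# The connectome tells us which neurons feed into which, but the weights
# and threshold are free parameters; the values here are illustrative
# placeholders, not measurements from any real worm.

def neuron_fires(input_activities, weights, threshold):
    """Fire iff the weighted sum of the inputs exceeds the threshold."""
    weighted_sum = sum(a * w for a, w in zip(input_activities, weights))
    return weighted_sum > threshold

# Example: a neuron with three inputs, two of which are currently firing.
inputs = [1.0, 0.0, 1.0]    # activity of upstream neurons
weights = [0.5, -0.3, 0.9]  # hypothetical values a genetic algorithm might find
print(neuron_fires(inputs, weights, threshold=1.0))  # True: 0.5 + 0.9 = 1.4 > 1.0
```

A genetic algorithm like the one in [2] searches over exactly these weight and threshold values, scoring candidate settings by how wormlike the resulting behavior is.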
To see why this isn’t enough, consider that nematodes are capable of learning. Sasakura and Mori (2013) [5] provide a reasonable overview. For example, nematodes can learn that a certain temperature indicates food, and will then seek out that temperature. Since they don’t do this by growing new neurons or connections, they must be doing it by updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can’t learn. They also don’t read weights off of any individual worm, which means we can’t talk about any specific worm as having been uploaded.
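To illustrate what’s missing, here is what a weight update could look like. This Hebbian-style rule is purely illustrative: nobody knows the actual plasticity rule C. elegans uses, which is part of the problem:

```python
# Purely illustrative: the real plasticity rule in C. elegans is unknown.
# A classic Hebbian-style update, just to make concrete what "updating
# connection weights" would mean inside a simulation.

def hebbian_update(weight, pre_fired, post_fired, learning_rate=0.1):
    """Strengthen a connection when its pre- and post-synaptic neurons co-fire."""
    if pre_fired and post_fired:
        return weight + learning_rate
    return weight

w = 0.5
w = hebbian_update(w, pre_fired=True, post_fired=True)
print(w)  # 0.6: the connection got stronger
# Existing worm simulations never run a step like this: weights stay fixed.
```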
If this doesn’t count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to a stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to the simulated stimulus the same way their physical counterparts had, that would be good progress. Additionally, you would want to demonstrate that similar learning was possible in the simulated environment.
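Schematically, the test looks something like this. Every function here (train, scan_weights, simulate, respond) is hypothetical, and steps 2 and 3 are exactly what we can’t do today:

```python
# Schematic of the proposed test, not runnable science: every function
# used here (train, scan_weights, simulate, respond) is hypothetical.

def uploading_test(worms_a, worms_b, stimulus):
    # 1. Train two groups of physical worms to respond in opposite ways.
    for worm in worms_a:
        worm.train(stimulus, response="approach")
    for worm in worms_b:
        worm.train(stimulus, response="avoid")

    # 2. Scan each individual worm's weights and simulate that specific worm.
    sims_a = [simulate(scan_weights(worm)) for worm in worms_a]
    sims_b = [simulate(scan_weights(worm)) for worm in worms_b]

    # 3. Each simulation should preserve its original's trained behavior.
    return (all(sim.respond(stimulus) == "approach" for sim in sims_a) and
            all(sim.respond(stimulus) == "avoid" for sim in sims_b))
```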
(In a 2011 post on what progress with nematodes might tell us about uploading humans, I looked at some of this research. Since then, not much has changed in nematode simulation. Moore’s law looks to be doing much worse in 2014 than it did in 2011, however, which makes the prospects for whole brain emulation substantially worse.)
I also posted this on my blog.
[1] The Structure of the Nervous System of the Nematode Caenorhabditis elegans, White et al. (1986).
[2] A Model of Motor Control of the Nematode C. elegans with Neuronal Circuits, Suzuki et al. (2005).
[3] It looks like, instead of learning weights, Busbice just set them all to +1 (excitatory) or −1 (inhibitory). It’s not clear to me how they knew which connections were which; my best guess is that they’re using the “what happens to work” details from [2]. Their full writeup is [4].
[4] The Robotic Worm, Busbice (2014).
[5] Behavioral Plasticity, Learning, and Memory in C. elegans, Sasakura and Mori (2013).
Agreed, and this is very similar to what I described in my comment on the other post about this here.
Where I disagree is with the sole focus on connection strengths or weights. They are certainly important, but synapses are unlikely to be adequately described by just one parameter. Further, local effects like neuropeptides likely play a role.
You’re right: connection strengths are probably not enough on their own. On the other hand, they’re almost certainly necessary and no one has figured out how to read them off of synapses.
Apparently, the neurons of C. elegans don’t even generate action potentials the way mammalian neurons do; rather, their activity is more complicated and fundamentally analog (source).
The linear threshold spiking neuron model used by Busbice may roughly approximate the activity of mammalian neurons, but it is likely a bad model of C. elegans neurons.
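To make the contrast concrete, here is an illustrative sketch; the sigmoid is just a generic stand-in for “graded analog response,” not a claim about the real biophysics:

```python
import math

# Illustrative contrast only: a hard threshold vs. a graded response.
# The sigmoid is a generic stand-in for "analog"; it is not a model of
# actual C. elegans biophysics.

def threshold_response(x, theta=1.0):
    """All-or-nothing: output a spike if input exceeds the threshold."""
    return 1.0 if x > theta else 0.0

def graded_response(x):
    """Smoothly varying: small input changes give small output changes."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (0.9, 1.1):
    print(threshold_response(x), round(graded_response(x), 3))
# threshold: 0.0 -> 1.0 (a discontinuous jump); graded: 0.711 -> 0.75
```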
He’s lucky that he managed to make the robot perform these simple Braitenberg-like behaviors.
Anyone know if there have been updates to this in the past few years? I made a very brief (30-second) attempt to search for information on it but had trouble figuring out what question to ask Google.
I just tried https://scholar.google.com/scholar?as_ylo=2015&q=c+elegans+emulation and don’t see anything relevant. I did find Why is There No Successful Whole Brain Simulation (Yet)? from 2019. While I’ve only skimmed it and its reference list, if there had been something new here I think they would have cited it.
I think we’re still stuck on both (a) we can’t read weights from real worms (and so can only model a generic worm) and (b) we don’t understand how weights are changed in real worms (and so can’t model learning).
I discussed this with a professor of neuroscience on Facebook.
Unfortunately, it seems that the inferential gap was not crossed.
In which direction? :) and do you think you can say anything about what was said in a way that would help close the gap? Thanks!
As arundelo said, it was frustrating how he wouldn’t commit to specific predictions. I get the feeling he had some philosophical idea about “but is a copy of me really me?” that was influencing him even when you were trying to keep things on a more concrete level. (This isn’t totally unreasonable on his part because there are sometimes disputes over this issue even here). Aside from the unlikely-to-be-productive tactic of telling him to read the sequences, perhaps you could have emphasized that you were interested in the objective behaviour and not the “identity” or subjective experience of the worm? I think you were trying to do that but maybe the contrast could have been more explicit?
Basically, it seems he was jumping ahead from thinking about worm uploads to thinking about human mind uploads and getting tangled up in classical philosophical dilemmas as a result.
Actually, most of that seems like a straightforward false dichotomy (between the “connectome” alone and a dynamic model with constant activity). Or I may misunderstand how he’s using the phrase “information flow”; e.g., it may stand for some technical point that Paul and I don’t understand at all.
I was pretty frustrated by the neuroscience prof’s reluctance to speak in terms of predictions—of what he’d expect to see as the result of some particular experiment—but you did great at politely pushing him in that direction, and I can’t think how you could have done better.
So is the method the worm uses for learning known? If we knew approximately the current weights, and we knew how those weights update, what else would be needed?
That is a way to prove that the worm was uploaded. But how would you actually do that? What other info is needed to get there, and how can we get it? Why can’t we measure when the neurons fire and get the weights out of that? (I get that it’s more complicated than that or it would have been done, but I don’t get why.)
Also, typo: pysical to physical. Edit: looks fixed, good.
Basically this is electrophysiology research on C. elegans. Most of the research being done, AFAIK, is hypothesis testing and doesn’t systematically measure all of the connection strengths at once. Plus, even if you did measure them all at once, you’d still have the correlation vs. causation problem, which is why davidad wanted to do optogenetics, but again AFAIK that didn’t actually get done.
Bottom line: this research is technically difficult and, like most research topics, not well funded.
More details: he was planning to engineer a nematode whose neurons would give off light when they activate and would also be light-sensitive, so that individual neurons could be activated with light. This lets you see which neurons fire in response to which others. He wrote:
I believe he’s no longer working on this, however, and the NemaLoad project is stalled. The last update was a year ago, and there haven’t been any updates to the project’s GitHub page since April 2014. It does look like davidad contributed to a 2013 paper surveying methods of neural recording, but this seems to be mostly a discussion of theoretical capability based on others’ work rather than anything learned from NemaLoad experiments.
He wrote, “If I’d had $1 million seed, I wouldn’t have had to cancel the project when I did...” on this Quora answer.
#Civilizationalinadequacy
Great post—I suggest moving it to Main.
Thanks! Done.
3 years on from https://www.lesswrong.com/posts/B5auLtDfQrvwEkw4Q/we-haven-t-uploaded-worms?commentId=Qx5DadETdK8NrtA9S.
Has any progress been made since?
These sorts of things seem to happen slowly, then suddenly: very little progress for a long time, then a breakthrough unlocks big jumps in progress.