It seems to me that this task has an unclear goal. Imagine I linked you a GitHub repo and said “this is a 100% accurate and working simulation of the worm.” How would you verify that? If we had a WBE (whole brain emulation) of Ray Kurzweil, we could at least say “this emulated brain does/doesn’t produce speech that resembles Kurzweil’s speech.” What can you say about the emulated worm? Does it wiggle in some recognizable way? Does it move towards the scent of food?
To see why this isn’t enough, consider that nematodes are capable of learning. [...] For example, nematodes can learn that a certain temperature indicates food, and then seek out that temperature. They don’t do this by growing new neurons or connections, so they must be updating their connection weights. All the existing worm simulations treat weights as fixed, which means they can’t learn. They also don’t read weights off of any individual worm, which means we can’t talk about any specific worm as being uploaded.
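To make that concrete, here is a minimal, purely illustrative sketch of the difference between a fixed-weight update (what the existing simulations effectively do) and one with a toy Hebbian-style plasticity rule. The function names, sizes, and update rule are assumptions for illustration only, not any existing simulator’s API.

```python
import numpy as np

def step_fixed(weights, activity):
    # One activity update with frozen connection weights:
    # nothing about the worm's experience ever changes the connectome.
    return np.tanh(weights @ activity)

def step_plastic(weights, activity, lr=0.01):
    # Same activity update, plus a toy Hebbian-style weight change,
    # so repeated stimuli can leave a trace in the connection weights.
    new_activity = np.tanh(weights @ activity)
    weights = weights + lr * np.outer(new_activity, activity)
    return weights, new_activity

# Toy run with 302 neurons (the C. elegans count); values are random.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(302, 302))
a = rng.normal(size=302)
w, a = step_plastic(w, a)
```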
If this doesn’t count as uploading a worm, however, what would? Consider an experiment where someone trains one group of worms to respond to a stimulus one way and another group to respond the other way. Both groups are then scanned and simulated on the computer. If the simulated worms responded to the simulated stimulus the same way their physical versions had, that would be good progress. Additionally, you would want to demonstrate that similar learning was possible in the simulated environment.
Quote jefftk.
(just included the quotation in my post)
A study by Alcor trained C. elegans worms to react to the smell of a chemical. They then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same test would distinguish a successful upload that carries the memory from one that does not.
Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans
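As a purely illustrative sketch of how that comparison might be scored, here is a chemotaxis-index style calculation (fraction of worms at the conditioned odorant minus fraction at a control spot). The counts below are invented, and the exact assay the study used may differ.

```python
def chemotaxis_index(n_at_odorant, n_at_control, n_total):
    # +1 means every worm went to the trained odorant,
    # -1 means every worm avoided it, 0 means no preference.
    return (n_at_odorant - n_at_control) / n_total

# Invented counts: trained worms before the procedure, trained worms
# after revival/upload, and naive worms as a baseline.
trained_before = chemotaxis_index(70, 10, 100)
trained_after = chemotaxis_index(62, 15, 100)
naive_after = chemotaxis_index(35, 33, 100)

# The claim being tested: trained_after stays well above naive_after,
# i.e. the memory survived the transfer.
print(trained_before, trained_after, naive_after)
```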
You could imagine an experimental design where you train worms to accomplish a particular task (e.g. learn which scent means food, if indeed they use scent; I don’t know). You then upload both trained and untrained worms. If the trained uploads perform better from the get-go than the untrained ones, that’s some evidence it’s the same worm. To make it more granular, there are a lot of learning tasks from behavioural neuroscience you could adapt.
You could also do single-neuron studies: train the worm on some task, find a neuron that seems to correspond to a particular abstraction. Upload the worm; check that the neuron still corresponds to the abstraction (a rough scoring sketch is below).
Or ablation studies: selectively impair certain neurons in the live trained worm, its upload, and control worms, and check that the same behaviour change shows up only in the trained worm and its upload.
Or you can get causal evidence via optogenetics.
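For the single-neuron proposal above, here is a rough, invented-data sketch of the scoring: record a candidate neuron’s activity alongside the stimulus in the live trained worm and again in its upload, and check whether the stimulus-activity correlation survives. The traces and any threshold are made up for illustration.

```python
import numpy as np

def stimulus_correlation(neuron_trace, stimulus_trace):
    # Pearson correlation between a neuron's activity and the stimulus
    # it is hypothesised to encode.
    return float(np.corrcoef(neuron_trace, stimulus_trace)[0, 1])

# Invented recordings: a binary stimulus and one neuron's responses,
# first in the live trained worm, then in its upload.
stimulus = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0], dtype=float)
live_trace = np.array([0.1, 0.0, 0.9, 0.8, 0.2, 0.7, 0.1, 0.9, 0.8, 0.1])
sim_trace = np.array([0.2, 0.1, 0.8, 0.7, 0.1, 0.8, 0.2, 0.7, 0.9, 0.2])

live_r = stimulus_correlation(live_trace, stimulus)
sim_r = stimulus_correlation(sim_trace, stimulus)

# If the neuron's role carried over, both correlations should be high
# and close to each other; a large drop is evidence against the upload.
print(live_r, sim_r)
```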
Optogenetics was exactly the method proposed by David; I just updated the article and included a full quote.
I originally thought that my post was already a mere summary of the previous LW posts by jefftk, that excessive quotation could make it too unoriginal, and that interested readers could simply read more by following the links. But I’ve just realized that giving sufficient context is important when you’re restarting a forgotten discussion.