Whole Brain Emulation: Looking At Progress On C. elegans
Being able to treat the pattern of someone’s brain as software to be run on a computer, perhaps in parallel or at a large speedup, would have a huge impact, both socially and economically. Robin Hanson thinks it is the most likely route to artificial intelligence. Anders Sandberg and Nick Bostrom of the Future of Humanity Institute put out a roadmap for whole brain emulation in 2008, which covers a huge amount of research in this direction, combined with some scaling analysis of the difficulty of the various tasks.
Because the human brain is so large, and we are so far from having the technical capacity to scan or emulate it, it’s difficult to evaluate progress. Some other organisms, however, have much smaller brains: the nematode C. elegans has only 302 neurons in its entire nervous system. It is extremely well studied and well understood, having been heavily used as a research animal for decades. Since at least 1986 we’ve known the full neural connectivity of C. elegans, something that would take decades and a huge amount of work to get for humans. At 302 neurons, simulation has been within our computational capacity for at least that long. With 25 years to work on it, shouldn’t we be able to ‘upload’ a nematode by now?
Reading through the research, there’s been some work on modeling subsystems and components, but I find only three projects that have tried to integrate this research into a complete simulation: the University of Oregon’s NemaSys (~1997), the Perfect C. elegans Project (~1998), and Hiroshima University’s Virtual C. elegans project (~2004). The latter two don’t have web pages, but they did put out papers: [1], [2], [3].
Another way to look at this is to list the researchers who seem to have been involved with C. elegans emulation. I find:
Hiroaki Kitano, Sony [1]
Shugo Hamahashi, Keio University [1]
Sean Luke, University of Maryland [1]
Michiyo Suzuki, Hiroshima University [2][3]
Takeshi Goto, Hiroshima University [2]
Toshio Tsuji, Hiroshima University [2][3]
Hisao Ohtake, Hiroshima University [2]
Thomas Ferree, University of Oregon [4][5][6][7]
Ben Marcotte, University of Oregon [5]
Sean Lockery, University of Oregon [4][5][6][7]
Thomas Morse, University of Oregon [4]
Stephen Wicks, University of British Columbia [8]
Chris Roehrig, University of British Columbia [8]
Catharine Rankin, University of British Columbia [8]
Angelo Cangelosi, Rome Institute of Psychology [9]
Domenico Parisi, Rome Institute of Psychology [9]
This seems like a research area where multiple groups at different universities try for a while and then move on. None of the simulation projects have gotten very far: their emulations are incomplete, with some pieces filled in by guesswork, genetic algorithms, or other artificial sources. I was optimistic about finding successful simulation projects before I started looking, but now that I haven’t, my estimate of how hard whole brain emulation would be has gone up significantly. While I wouldn’t say whole brain emulation could never happen, it looks to me like it is a very long way out, probably hundreds of years.
Note: I later reorganized this into a blog post, incorporating some feedback from these comments.
Papers:
[1] The Perfect C. elegans Project: An Initial Report (1998)
[2] A Dynamic Body Model of the Nematode C. elegans With Neural Oscillators (2005)
[3] A model of motor control of the nematode C. elegans with neuronal circuits (2005)
[4] Robust spatial navigation in a robot inspired by C. elegans (1998)
[5] Neural network models of chemotaxis in the nematode C. elegans (1997)
[6] Chemotaxis control by linear recurrent networks (1998)
[7] Computational rules for chemotaxis in the nematode C. elegans (1999)
[8] A Dynamic Network Simulation of the Nematode Tap Withdrawal Circuit: Predictions Concerning Synaptic Function Using Behavioral Criteria (1996)
[9] A Neural Network Model of Caenorhabditis Elegans: The Circuit of Touch Sensitivity (1997)
Hi all,
Glad there’s excitement on this subject. I’m currently coordinating an open source project whose goal is to do a full simulation of C. elegans (http://openworm.googlecode.com). More on that in a minute.
If you are surveying past C. elegans simulation efforts, you should be sure not to leave out the following:
A Biologically Accurate 3D Model of the Locomotion of Caenorhabditis elegans, Roger Mailler, U. Tulsa http://j.mp/toeAR8
C. elegans Locomotion: An Integrated Approach, Jordan Boyle, U. Leeds http://j.mp/fqKPEw
Back to OpenWorm. We’ve just published a structural model of all 302 neurons (http://code.google.com/p/openworm/wiki/CElegansNeuroML), represented as NeuroML (http://neuroml.org). NeuroML allows the representation of multi-compartmental models of neurons (http://en.wikipedia.org/wiki/Biological_neuron_models#Compartmental_models). We are using this as a foundation to overlay the C. elegans connectivity graph and then add as much as we can find about the biophysics of the neurons. We believe this represents the first open source attempt to reverse-engineer the C. elegans connectome.
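To give a flavor of what “multi-compartmental” means, here is a toy two-compartment passive-membrane model in Python (forward Euler integration; all parameters are invented for illustration and are far simpler than anything a real NeuroML-described cell would contain):

```python
import numpy as np

# Two coupled passive compartments; injected current into the "soma"
# spreads through the axial conductance into the "dendrite".
C = 1.0         # membrane capacitance (invented units)
g_leak = 0.1    # leak conductance
E_leak = -65.0  # leak reversal potential (mV)
g_axial = 0.5   # coupling conductance between the two compartments

v = np.array([-65.0, -65.0])    # [soma, dendrite] voltages in mV
dt = 0.01                       # ms
for _ in range(2000):           # 20 ms of constant current into the soma
    i_inj = np.array([1.0, 0.0])
    i_leak = g_leak * (E_leak - v)
    i_axial = g_axial * (v[::-1] - v)   # current flowing between compartments
    v += dt / C * (i_leak + i_axial + i_inj)
print(v)   # soma ~ -59.5 mV, dendrite ~ -60.5 mV: the injected current spreads
```

A real model multiplies this by hundreds of compartments per neuron, each with its own active channel kinetics.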
One of the comments mentioned Andrey Palyanov’s mechanical model of C. elegans. He is part of our group and is currently focused on moving to a soft-body simulation framework rather than the rigid one they created here: http://www.youtube.com/watch?feature=player_embedded&v=3uV3yTmUlgo Our first goal is to combine the neuronal model with this physical model, in order to go beyond the biophysical realism of previous studies. The physical model will then serve as the “read out” to make sure that the neurons are doing appropriate things.
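As a cartoon of the body-model idea (not our actual code), consider a mass-spring chain in which “muscle” activity modulates spring rest lengths and the resulting body configuration is the read-out:

```python
import numpy as np

# Toy 1D mass-spring chain standing in for a soft-body worm. The real physics
# is far richer; this only illustrates that neural/muscle activity drives body
# mechanics, and body shape is the behavioral "read out". Numbers are invented.
n, k, damping, dt = 24, 50.0, 2.0, 0.001
pos = np.arange(n, dtype=float)   # segment positions along the body axis
vel = np.zeros(n)

for step in range(20000):
    t = step * dt
    # "Muscle" drive: a traveling wave modulates each spring's rest length
    rest = 1.0 + 0.2 * np.sin(2 * np.pi * (np.arange(n - 1) / n - t))
    stretch = np.diff(pos) - rest          # positive => spring is stretched
    force = k * stretch                    # Hooke's law per segment
    acc = np.zeros(n)
    acc[:-1] += force                      # a stretched spring pulls its ends together
    acc[1:] -= force
    vel += dt * (acc - damping * vel)
    pos += dt * vel
print(pos[:4])   # the body configuration is what an observer would score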
Our roadmap for the project is available here: http://code.google.com/p/openworm/wiki/Roadmap
We have a mailing list here: http://groups.google.com/group/openworm
We have regular meetings on Google+ Hangout. If you want to help, we can surely find a way to include you. If you are interested, please let us know and we’ll loop you in.
Cheers, Stephen
David Dalrymple is also trying to emulate all of C. elegans, and was at the Singularity Summit.
http://syntheticneurobiology.org/people/display/144/26
That’s me. In short form, my justification for working on such a project where many have failed before me is:
The “connectome” of C. elegans is not actually very helpful information for emulating it. Contrary to popular belief, connectomes are not the biological equivalent of circuit schematics. Connectomes are the biological equivalent of what you’d get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.
What you actually need is to functionally characterize the system’s dynamics by performing thousands of perturbations to individual neurons and recording the results on the network, in a fast feedback loop with a very very good statistical modeling framework which decides what perturbation to try next.
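A cartoon of that loop, with the model class and acquisition rule chosen purely for illustration (this is not the actual system):

```python
import numpy as np

# Perturb-record-model loop: pick the perturbation expected to be most
# informative, apply it, record the network response, update the model.
rng = np.random.default_rng(0)
n_neurons = 302
W_est = np.zeros((n_neurons, n_neurons))        # estimated influence matrix
uncertainty = np.ones((n_neurons, n_neurons))   # how unsure we are, per pair

def perturb_and_record(neuron):
    """Stand-in for optogenetic stimulation plus imaging readout."""
    return rng.normal(size=n_neurons)            # placeholder response vector

for trial in range(10_000):
    target = int(np.argmax(uncertainty.sum(axis=0)))  # most uncertain neuron
    response = perturb_and_record(target)
    # Simple online update toward the observed response (illustrative only)
    W_est[:, target] += 0.1 * (response - W_est[:, target])
    uncertainty[:, target] *= 0.9                # confidence grows with data
```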
With optogenetic techniques, we are just at the point where it’s not an outrageous proposal to reach for the capability to read and write to anywhere in a living C. elegans nervous system, using a high-throughput automated system. It has some pretty handy properties, like being transparent, essentially clonal, and easily transformed. It also has less handy properties, like being a cylindrical lens, being three-dimensional at all, and having minimal symmetry in its nervous system. However, I am optimistic that all these problems can be overcome by suitably clever optical and computational tricks.
I’m a disciple of Kurzweil, and as such I’m prone to putting ridiculously near-future dates on major breakthroughs. In particular, I expect to be finished with C. elegans in 2-3 years. I would be Extremely Surprised, for whatever that’s worth, if this is still an open problem in 2020.
How would you nail those two predictions down into something I could register on PredictionBook.com?
“A complete functional simulation of the C. elegans nervous system will exist on 2014-06-08.” 76% confidence
“A complete functional simulation of the C. elegans nervous system will exist on 2020-01-01.” 99.8% confidence
Any thoughts on this today?
See here
I’m curious where you’d estimate 50% chance of it existing and where you’d estimate 90%.
The jump from 76% to 99.8% is to my mind striking for a variety of reasons. Among other concerns, I suspect that many people here would put a greater than 0.2% chance on some sort of extreme civilization-disrupting event in that window. Assuming a 0.2% chance of a civilization-disrupting event in an 8-year period is roughly the same as a 2% chance of such an event occurring in the next hundred years, which doesn’t look so unreasonable, except that longer-term predictions should have more uncertainty. Overall, a 0.2% chance of disruption seems to be too high, and if your probability model is accurate then one should expect the functional simulation to arrive well before then. But note also that civilization collapsing is not the only thing that could block this sort of event; events much smaller than full-on collapse could do it, as could many more mundane issues.
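Making the conversion explicit, assuming a constant annual hazard rate (my simplification):

```python
# 0.2% over 8 years, annualized, then compounded over a century
p_8yr = 0.002
annual = 1 - (1 - p_8yr) ** (1 / 8)     # ~0.025% per year
p_100yr = 1 - (1 - annual) ** 100       # ~2.5% per century
print(annual, p_100yr)
```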
That high an estimate seems likely to be vulnerable to the planning fallacy.
Overall, your estimate seems to be too confident, the 2020 estimate especially so.
I would put something like a 0.04% chance on a neuroscience-disrupting event (including a biology-disrupting event, a science-disrupting event, or a civilization-disrupting event). I put something like a 0.16% chance on uploading the nematode actually being so hard that it takes 8 years. I totally buy that this estimate is a planning fallacy. Unfortunately, being aware of the planning fallacy does not make it go away.
True. But there are ways to calibrate for it. It seems that subtracting off 10-15% for technological predictions works well. If one were being more careful, one might subtract not a fixed percentage but something that becomes less severe as the probability estimate of the event goes up, so that one could still have genuinely high confidence levels. But if in doubt, simply reducing the probability until the planning fallacy no longer looks likely is one way to approach things.
Bleh, I see I was again unclear about what I meant by nailing down—more precisely, how would one judge whatever has been accomplished by 2014/2020 as being ‘complete’ or ‘functional’? Frequently there are edge cases (there’s this paper reporting one group’s abandoned simulation which seemed complete oh except for this wave pattern didn’t show up and they had to simplify that...). But since you were good enough to write them:
http://predictionbook.com/predictions/4123
http://predictionbook.com/predictions/4124
Ah, I see. This is the sort of question that the X Prize Foundation has to wrestle with routinely. It generally takes a few months of work to take even a relatively clear problem statement and boil it down to a purely objective judging procedure. Since I already have an oracle for what it is I want to develop (does it feel satisfying to me?), and I’m not trying to incentivize other people to do it for me, I’m not convinced that I should do said work for the C. elegans upload project. I’m not even particularly interested in formalizing my prediction for futurological purposes, since it’s probably planning fallacy anyway. However, I’m open to arguments to the contrary.
For your information, the above two predictions were judged wrong.
@davidad, any updates on your work?
See https://www.jefftk.com/p/we-havent-uploaded-worms and Why Is There No Successful Whole Brain Simulation Yet (2019)
Well, that’s fine. I’ve made do with worse predictions than that.
(Which paper are you referring to?)
That was just a rhetorical example; I don’t actually know what the edge cases will be in advance.
99.8% confidence—can I bet with you at those odds?
Thoughts on that first prediction?
Given the wild unwarranted optimism an average PhD student has in the first year or two of their research, I would expect that David will have enough to graduate 5 or 6 years after he started, but the outcome will not be anywhere close to the original goal, thus:
90% that “No whole brain emulation of C. elegans by 2015”
Then again, he is not your average PhD student (the youngest person to ever start a graduate program at MIT—take that, Sheldon!), so I hope to be proven wrong.
What’re you folks up to now? Have you updated because you were “Extremely Surprised”? What do the major challenges appear to be these days, and what year would you again be “Extremely Surprised … if this is still an open problem”?
Ha, I’ll keep an eye out for your publications. I’m particularly interested in how far you’ll have to go in gathering data, and in what you’ll be able to make of what is already known. I expect that scans aiming for connectome description already contain some neuron type data, due to morphological differences between neurons. I don’t know what sets of sensors are used for those scans, but maybe getting a broader spectrum could provide clues as to which neuron types occupy which space inside the connectome. SEM can, after all, determine the chemical composition of materials, can’t it? As-is, this seems a pretty breakneck undertaking, but I wish you the best of luck.
In other news, there is, luckily, more and more work in this field: http://www.theverge.com/2011/11/16/2565638/mit-neural-connectivity-silicon-synapse
Predictions for silicon-based processors are pretty optimistic as well: Intel aims to achieve 10nm by 2014, and a similar date is pushed by nVidia. Past that date we may see some major leaps in available technology (or not), and the development of multi-processor computation algorithms is finally gaining momentum after Von Neumann’s Big Mistake.
Maybe Kurzweil’s 2025 date for brain emulation is a bit overoptimistic, but I don’t expect it to take much longer. I do think that the first dozen successful neural structure emulations will be a significant breakthrough, and we’ll see a rapid expansion similar to the one genetics went through not so long ago.
“Connectomes are the biological equivalent of what you’d get if you removed all the component symbols from a circuit schematic and left only the wires. Good luck trying to reproduce the original functionality from that data.”
This suggests that even a full 5nm SEM imaging pass over the brain would not yield enough information about the individual to emulate them.
It’s worth noting that a 5nm SEM imaging pass will give you loads more information than a connectome, especially in combination with fancy staining techniques. It just so happens that most people doing SEM imaging intend to extract a connectome from the results.
That said, given the current state of knowledge, I don’t think there’s good reason to expect any one particular imaging technology currently known to man to be capable of producing a human upload. It may turn out that as we learn more about stereotypical human neural circuits, we’ll see that certain morphological features are very good predictors of important parameters. It may be that we can develop a stain whose distribution is a very good predictor of important parameters. Since we don’t even know what the important parameters are, even in C. elegans, let alone mammalian cortex, it’s hard to say with confidence that SEM will capture them.
However, none of this significantly impacts my confidence that human uploads will exist within my lifetime. It is an a priori expected feature of technologies that are a few breakthroughs away that it’s hard to say what they’ll look like yet.
Here we are now; how would you assess the progress of C. elegans emulation in general, and of your particular approach?
Am I hearing hints of Tononi here?
It’s fair to say that I am confident Tononi is on to something (although whether that thing deserves the label “consciousness” is a matter about which I am less confident). However, Tononi doesn’t seem to have any particular interest in emulation, nor do the available tools for interfacing to live human brains have anything like the resolution that I’d expect to be necessary to get enough information for any sort of emulation.
Maybe a more troubling situation for the feasibility of human brain emulation would be if we had had nematode emulation working for a decade or more but had made no apparent headway toward emulating the next level of still-not-very-impressive neural complexity, like a snail. At the moment there’s still the possibility that we’re just missing some kind of methodological breakthrough, and once that’s achieved there’s going to be a massive push toward quickly developing emulations of more complex animals.
I think you are right on. I would extend your comment a bit: we are not just missing a methodological breakthrough, we are not even really attempting to develop the methods necessary. The problem is not just scientific but also a matter of what is considered science worth funding.
A. Palianov now works in Russia on a nematode brain emulation project: http://www.computerra.ru/interactive/589824
Don’t forget the vid: http://www.youtube.com/watch?v=3uV3yTmUlgo
Does this assessment take into account the possibility of intermediate acceleration of human cognition?
It doesn’t.
I wrote to Ken Hayworth, a neuroscience researcher who works on scanning and is interested in whole brain emulation, and he wrote back:
Further exchange:
Me:
Hayworth:
I would respectfully disagree with Dr. Hayworth.
I would challenge him to show a “well characterized and mapped out part of the mammalian brain” that has a fraction of the detail already known in C. elegans. Moreover, the prospect of building a simulation requires that you can constrain the inputs and outputs to the simulation. While this is a hard problem in C. elegans, it’s orders of magnitude more difficult to do well in a mammalian system.
There is still no retina connectome to work with (C. elegans has one). There are debates about cell types in the retina (C. elegans has unique names for all its cells). The gene expression maps of the retina are not registered into a common space (C. elegans has that). Calcium imaging in the retina is expensive (it is orders of magnitude easier in C. elegans). Genetic manipulation in mouse retina is expensive and takes months to produce specific mutants (you can feed C. elegans RNAi and make a mutant immediately).
There are now methods, along the lines of GFP (http://en.wikipedia.org/wiki/Green_fluorescent_protein), to “read the signs of synapses”. There is just very little interest from government funding agencies in applying them to C. elegans. David Hall is one of the few pushing this kind of mapping work in C. elegans forward.
What confuses this debate is that unless you study neuroscience deeply, it is hard to tell the “known unknowns” apart from the “unknown unknowns”. Biology isn’t solved, so there are a lot of “unknown unknowns”. Even so, there are plenty of funded efforts in biology and neuroscience to do simulations. In C. elegans, though, there are likely to be many fewer “unknown unknowns”, because we have far more comprehensive data about its biology than we do for any other species.
Building simulations of biological systems helps to assemble what you know, but it can also let you rationally work with the “known unknowns”. The “signs of synapses” are an example of known unknowns: we can fit them into a simulation engine without precise answers today and fill them in tomorrow. The claim that no one should start simulating the worm based on the current data has no merit when you consider how much there is to be done just to get to a framework with the capacity to organize the “known unknowns”, so that we can actually do something useful with the answers once we have them. More importantly, a simulation makes the gaps much clearer. Right now, in the absence of any C. elegans simulation, data are being generated without the focused purpose of feeding into a global computational framework for understanding C. elegans behavior. I would argue that the field would be much better off collecting data to fill the gaps of a simulation, rather than everyone working at cross purposes.
That’s why, over at the OpenWorm project (http://openworm.googlecode.com), we are working on the challenge of building not just a C. elegans simulation, but a general framework for doing so.
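As a hypothetical illustration of organizing the “known unknowns”: keep the wiring fixed, but leave each synapse’s sign as an explicit parameter to be overwritten as measurements arrive. The toy rate model below is entirely my own invention, not OpenWorm code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 302                                    # C. elegans neuron count
connected = rng.random((n, n)) < 0.02      # stand-in for the known connectome
signs = np.zeros((n, n))                   # unknown: +1 excitatory, -1 inhibitory
signs[connected] = rng.choice([-1.0, 1.0], size=connected.sum())  # placeholders

def step(activity, weights):
    """One update of a toy rate model; real dynamics would be far richer."""
    return np.tanh(weights @ activity)

weights = 0.1 * connected * signs
activity = rng.random(n)
for _ in range(100):
    activity = step(activity, weights)
# When a synapse's sign is actually measured, overwrite its entry in
# `signs` and rerun: the framework organizes the gaps explicitly.
```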
Unbounded Scales, Huge Jury Awards, & Futurism:
My reasoning for saying hundreds of years was that this very simple subproblem has taken us over 25 years. Say we’ll solve it in another ten. The amount of discovery and innovation needed to simulate a nematode seems like maybe 1/100th of that needed for a person. Naively this would say 100 × (25+10) = 3,500 years. More people would probably work on this if we had initial successes and it looked practical, though. Maybe this gives us a 10x boost? Which still leaves (100/10) × (25+10), or ~350 years.
Very wide error bars, though.
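The same back-of-envelope in code form, with every number being an assumption from the comment above rather than data:

```python
years_so_far, years_to_finish = 25, 10    # nematode effort: past and assumed
relative_work = 100                        # human WBE ~100x the nematode's
effort_multiplier = 10                     # assumed boost from more researchers
naive = relative_work * (years_so_far + years_to_finish)
adjusted = relative_work / effort_multiplier * (years_so_far + years_to_finish)
print(naive, adjusted)                     # 3500, 350.0
```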
You must have been very surprised by the progress pattern of the Human Genome Project, then. It’s as if 90% of the real work was about developing the right methods rather than simply plugging along at the initial slow pace.
I’m not sure what you’re responding to. I wasn’t trying to say that the human brain was only 100x the size or complexity of a nematode’s brain-like-thing; it’s far larger and more complex than that. I was saying that even once we have a nematode simulated, we will have done only ~1% of the “real work” of developing the right methods.
I understand that this is your intuition, but I haven’t seen any good evidence for it.
The evidence I have that the methods developed for the nematode are dramatically insufficient to apply to people:
nematodes are transparent
they’re thin and so easy to get chemicals to all of them at once
their inputs and outputs are small enough to fully characterize
their neural structure doesn’t change at runtime
while they do learn, they don’t learn very much
It’s not strong evidence, I agree. I’d like to get a better estimate here.
This lecture on uploading C. elegans is very relevant.
(In short, biophysicists have known where the neurons are located for a long time, but they’ve only just recently developed the ability to analyze the way they affect one another, and so there’s fresh hope of “solving” the worm’s brain. The new methods are also pretty awesome.)
My intuition is that most of the difficulty comes from the complexity of the individual cells: we don’t understand nearly all of the relevant things they do that affect neural firing. This is basically independent of how many neurons there are or how they’re wired, so I expect that correctly emulating a nematode brain will only happen when we’re quite close to emulating larger brains.
If the “complicated wiring” problem were the biggest hurdle, then you’d expect a long gap between emulating a nematode and emulating a human.
Are these projects about emulation? The Oregon and Rome projects seem to treat the brain as a black box, rather than taking advantage of Brenner’s connectome. I’m not sure about the others. That doesn’t tell us much about the difficulty of emulation, except that they thought their projects were easier.
Brenner’s connectome is not enough information. At the very least, you need to know whether synapses are excitatory or inhibitory. This pretty much needs to be measured, which is rather different from what Brenner did. It might not require a lot of measurement: once you’ve measured a few, maybe you can recognize the others. Or maybe not.
The Oregon one looks to me like it was about emulation: “each of the 302 neurons will be implemented according to available anatomical and physiological data.”
The Rome one, I think you may be right about.
Is the nematode too small to measure whether synapses are excitatory or inhibitory?
I was basing my judgement on the Oregon papers. I suppose that there may be emulation attempts lurking behind other non-emulation papers.
It’s also possible they only proposed to do emulation, but never got funded.
How well can a single neuron or a few neurons be simulated? If we have good working models of those, which behave as we see in life, then WBE might be harder than it looks; if no such models yet exist, then the failures to model a 302-neuron system are not such good evidence of difficulty.
There are many models of neurons, at many levels of detail. I think that the Neuron program uses the finest detail of any existing software.
I see the primary purpose of simulating a nematode as measuring how well such models actually work. If they do work, it also lets us estimate the amount of detail needed, but the first question is whether these models are biologically realistic. An easier task would be to test whether the models accurately describe a bunch of neurons in a petri dish. The drawback of that approach is that it is not clear what it would mean for a model to be adequate for that purpose, whereas in an organism we know what constitutes biologically meaningless noise. Also, realistic networks probably suppress certain kinds of noise.
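For a sense of what’s available off the shelf, a single-compartment Hodgkin-Huxley cell in NEURON’s Python interface looks roughly like this (a minimal sketch with arbitrary parameters; assumes NEURON is installed):

```python
from neuron import h
h.load_file("stdrun.hoc")                 # brings in the standard run system

soma = h.Section(name="soma")
soma.L = soma.diam = 20                   # microns; a simple cylinder
soma.insert("hh")                         # built-in Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))                # current injection at the midpoint
stim.delay, stim.dur, stim.amp = 5, 50, 0.1   # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)   # record membrane potential
t = h.Vector().record(h._ref_t)

h.finitialize(-65)                        # initialize to -65 mV
h.continuerun(60)                         # run 60 ms; v now holds the spikes
```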
When I googled for information on neuron emulation, that site came up as the first hit. I’ve used the search box to look for ‘elegans’ and ‘nematode’ - both 0 hits, so I figure no-one is discussing that stuff on their forum.
There is a good review of strategies for building computational models of neurons here:
http://www.ncbi.nlm.nih.gov/pubmed/17629781
What kind of reasoning leads you to this time estimate? Hundreds of years is an awfully long time—consider that two hundred years ago nobody even knew that cells existed, and there didn’t exist any kind of computers.
From your description of the state of the field, I guess we won’t see an uploaded nematode very soon, but getting there in a decade or two doesn’t seem impossible. It seems a bit counterintuitive to me that learning “no nematode now, but maybe in ten years” would move the point estimate for human uploads by several centuries. Because, what if we had happened to do this literature survey ten years later, and found out that nematodes had indeed been successfully uploaded? If the estimate is sensitive to very small changes like that, it must be very uncertain.
Humans are notoriously poor at estimating probabilities, and our ability to accurately predict timescales beyond the immediate is just as poor. It seems likely that this “hundreds of years” was shorthand for “there does not seem to be a direct roadmap to achieving this goal from where we currently are, and therefore I must assign an arbitrarily distant point in the future as its most-likely-to-be-achieved date.”
This is purely guesswork / projection on my part, however.
I was disappointed when I first looked into the C. elegans emulation progress. Now I’m not so sure it’s a bad sign. It seems to me that at only 302 neurons the nervous system is probably far from the dominant system of the organism. Even with a perfect emulation of the neurons, it’s not clear to me if the resulting model would be meaningful in any way. You would need to model the whole organism, and that seems very hard.
Contrast that with a mammal, where the brain is sophisticated enough to do things independently of feedback from the body, and where we can see these large-scale neural patterns with scanners. If we uploaded a mouse brain, presumably we could get a rough idea that the emulation was working without ever hooking it up to a virtual body.
The lobster stomach ganglion (30 neurons, but a ton of synapses) might be better for this, since its inputs and outputs are probably cleaner.
Work on modeling the lobster stomach ganglion is going on at Brandeis, and what they are finding is important: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2913134&tool=pmcentrez&rendertype=abstract
Given the results they are finding, and building on their methods, it is not inappropriate to start thinking one level up, to C. elegans.
Also because there’s fictional prior art?
Maybe there’s fictional prior art because the lobster stomach might be better.
If you’re talking about Charlie Stross’s Lobsters, yes this was inspired by Henry Abarbanel’s work. He ran around the office going “They’re uploading lobsters in San Diego!”
“You would need to model the whole organism, and that seems very hard.”
There are only ~100 muscle cells. People are trying to model the brain-body combination, but that doesn’t sound unreasonably hard to me.
You need more than just muscle cells to do a whole-body emulation here: C. elegans has roughly 1000 cells all told (it varies by sex; hermaphrodites have somewhat fewer, males somewhat more).
I’ve reorganized this into a blog post incorporating what I’ve learned in the comments here.
Could you be explicit about what you learned? I can’t tell from comparing the two posts.
Most of the blog post version is just reorganization and context for a different audience, but there are some changes reflecting learning about who is working on this. Specifically, I didn’t know before about the OpenWorm project, Stephen Larson, David Dalrymple, or the 2009 and 2010 body model papers. While I think in a few years I’ll be able to update my predictions based on their experiences, this new information about people currently working on the project didn’t affect my understanding of how difficult or far away nematode simulation or WBE is.
Couldn’t you say the same about AGI projects? It seems to me that one of the reasons some people are relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.
This depends on whether the problem is the basic complexity of modeling a neural network or learning how to do it. If the former, then we may be looking at a long time. But if it’s the latter, then we really just need more attempts, successful or not, to learn from, and a framework which allows a leap in understanding could arrive.
I don’t know that repeatedly doing the wrong thing will help inform us how to do the right thing. This seems counterfactual to me. Certainly it informs us what the wrong thing is, but… without additional effort to more finely emulate the real-time biochemical actions of neurons, it seems that emulating what we already know won’t lead us to deeper insights as to what we don’t. The question becomes: how do we discern that missing information?
Emulations are certainly a vital part of that process, however: without them we cannot properly gauge how close we are to ‘knowing enough for government work’.
Everything that fails does so for a reason and in a particular way. In engineering, mere bugs aside, everything fails at the frontier of our knowledge, and our failures carry information about the shape of that frontier back to us. We learn what problems need to be overcome and can, with many failures, generalize what the overall frontier is like, connect its problems, and create concepts which solve many at once.
Oh, absolutely. But if they keep failing for the same reason and in the same way, re-running the simulations doesn’t get you any unique or novel information. It only reinforces what you already know.
I acknowledged this as I said, “Emulations are certainly a vital part of that process, however: without them we cannot properly gauge how close we are to ‘knowing enough for government work’.”
I think the problem here is that you think each instance of a simulation is actually an “attempt”. A simulation is a model of some behavior; unlike climbing Everest (which I did in 2003), taming Pegasus (in -642) or repelling the Golden Horde (1257-1324, when I was called away on urgent business in Stockholm), each run of a model is a trial, not an attempt. Each iteration of the model is an attempt, as is each new model.
We need more attempts. We learn something different from each one.
No, the problem here is more that I don’t believe that it is any longer feasible to run a simulation and attempt to extract new information without direct observation of the simulated subject-matter.
Yes, absolutely. But I don’t believe we can do anything other than repeat the past by building models based on modeled output without direct observation at this time.
So why not just say “to clarify, I believe that we do not have enough knowledge of C. elegans’ neuroanatomy to build new models at this time. We need to devote more work to studying that before we can build newer models”? That’s a perfectly valid objection, but it contradicts your original post, which states that C. elegans is well understood neurologically.
If you believe that we cannot build effective models “without [additional] direct observation”, then you have done two things: you’ve objected to the consensus that C. elegans is well understood, and you’ve provided a criterion (an effective upload model of its neuroanatomy) for judging how well we understand it.
My original post stated, “without additional effort to more finely emulate the real-time biochemical actions of neurons, it seems that emulating what we already know won’t lead us to deeper insights as to what we don’t.”
Your assertion (quoted in-line in this comment) is false. I said what I meant the first time ’round: we don’t know enough about how neurons work yet, and without that understanding any models we build now won’t yield us any new insights into how they do.
This, furthermore, has nothing to do with C. elegans in specific.
Since the goal of these models is to emulate the behavior of C. elegans, and the models do not yet do this, it is clear that one of two things is true: either we do not understand C. elegans or we do not understand neurobiology sufficiently to achieve this goal.
I have made my assertion as to which this is, I have done so quite explicitly, and I have been consistent and clear in this from my first post in this thread.
So where’s the confusion?
“The first time around” for the OPer is the OP, from which it is absent and in which you identify the problem as incomplete attempts.
I am not jkaufman. So I don’t know that I follow what you’re trying to say here. This means that either you or I are confused. In either case, no successful communication is currently occurring.
Could you please clarify what it is you’re trying to say?
Nothing to clarify, actually. I apologize; I’ve been busy and the header switch occasioned by using the context link threw me. It changes the title to “XXXX comments on YYYY”. Not being someone who comments consistently, this tends to make me mistake who originally posted because it plants an association between the person I’m replying to and the title of the post.
Ahh. Much is explained. :)
Well, hopefully this incident will serve to reinforce this particular tidbit and prevent you from having a repeat occurrence.
Maybe. I read a massive quantity of material daily, on the order of 80,000-90,000 words some weeks. This is combined with commenting across a variety of forums and fields. I rely heavily on cues from websites to keep straight who I’m talking to, and that I’m even on the right submission forms when I say something.
Typo in the title!
fixed
IBM claims to be doing a cat-brain-equivalent simulation at the moment, albeit 600 times slower and not covering all parts of the brain.
Henry Markram of the Blue Brain Project described this claim as a “hoax and a PR stunt”, “shameful and unethical”, and “mass deception of the public”.
Any new developments on the C. elegans simulation in the past 3+ years?
I’m curious about the same thing as [deleted].