To my knowledge, the most recent C. elegans model dates all the way back to 2009: it is this PhD thesis, which I admit I have not read in its entirety. I found it on the OpenWorm history page, which unfortunately is looking rather sparse.
I was trying to go through everything they have, but again, I was very disillusioned after trying to fully replicate + expand this paper on chemotaxis. You can read more about what I did here on my personal site. It's pretty lengthy, so the TL;DR is that I tried to convert his highly idealized model back into explicit neuron models, and it just didn't really work. Explicitly modeling C. elegans in any capacity would be a great project because there is so much published; you can copy others and fill in details or abstract as you wish. There is even an OpenWorm Slack, but I don't remember how to join + it's relatively inactive.
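For concreteness, by "explicit neuron models" I mean something like a leaky integrate-and-fire unit instead of an abstract rate variable. A minimal sketch of one (all constants here are illustrative defaults, not values from the paper):

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, Euler-integrated.
# All constants are illustrative defaults, not from any specific model.
dt = 1e-4        # timestep (s)
tau_m = 20e-3    # membrane time constant (s)
v_rest = -70e-3  # resting potential (V)
v_thresh = -54e-3
v_reset = -80e-3
r_m = 1e7        # membrane resistance (ohms)

v = v_rest
spikes = []
for step in range(int(0.5 / dt)):
    i_in = 2e-9  # constant input current (A); synapses would supply this in a real model
    v += (-(v - v_rest) + r_m * i_in) / tau_m * dt
    if v >= v_thresh:
        spikes.append(step * dt)  # record spike time, then reset
        v = v_reset
```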
That is more than enough stuff to keep you busy, but if you want to hear me complain about learning rules, read on.
I am really frustrated with learning rules for a couple of reasons. The biggest one is that researchers just don't seem to have very much follow-through on the obvious next steps. Either that or I'm really bad at finding/reading papers. In any case, what I would love to work on/read about is a learning algorithm that:
- Uses only local information + maybe some global reward function (as in, it can't be some complicated error minimizer like backpropagation; people generally call this biologically plausible). A sketch of roughly what I mean is below this list.
- Has experimental evidence that real neurons actually learn like this.
- Can do well on one-shot learning tasks (fear/avoidance can be learned from a single negative stimulus even in really simple animals).
- Performs well on general learning. As an example, I tried to recreate tuning curves with LIF neurons using the BCM rule + homeostasis (also sketched below): it was really easy to get a population of neurons to respond differently to horizontal vs. vertical sine waves, but if those sine waves had a phase shift it basically completely failed.
- Works with deep/complex recurrent architectures.
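On the first point, the kind of rule I have in mind is a three-factor one: a purely local Hebbian eligibility trace, gated by a scalar reward broadcast to every synapse. A minimal sketch, with a made-up toy objective and untuned parameters:

```python
import numpy as np

# Sketch of a three-factor ("local info + global reward") update.
# Toy task and all parameters are hypothetical, nothing is tuned.
rng = np.random.default_rng(0)

n_in, n_out = 50, 10
w = rng.normal(0.0, 0.1, (n_out, n_in))
elig = np.zeros_like(w)   # eligibility trace: only local pre*post activity
eta, tau_e = 1e-3, 0.9    # learning rate, trace decay

for trial in range(1000):
    x = rng.random(n_in)                  # presynaptic rates
    y = np.tanh(w @ x)                    # postsynaptic rates (local)
    elig = tau_e * elig + np.outer(y, x)  # decaying Hebbian trace

    # Scalar reward broadcast to every synapse (dopamine-like signal).
    # Toy objective: reward high activity in the first output unit.
    reward = y[0]
    w += eta * reward * elig
```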
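And here is roughly the BCM setup I was describing, stripped down to a single rate-based unit (my actual experiment used LIF neurons; the constants here are illustrative). The sliding threshold is the homeostatic part: it tracks the recent average of y² so the neuron's response can't run away:

```python
import numpy as np

# Rate-based sketch of the BCM rule with a sliding threshold.
# Parameter values are illustrative, not tuned.
rng = np.random.default_rng(0)

n_inputs = 100          # pixels of a small image patch
eta = 1e-4              # learning rate
tau_theta = 100.0       # time constant of the sliding threshold (steps)

w = rng.normal(0.0, 0.1, n_inputs)   # feedforward weights
theta = 1.0                          # sliding modification threshold ~ <y^2>

def grating(phase, vertical=True):
    """Toy oriented sine grating flattened into an input vector."""
    side = int(np.sqrt(n_inputs))
    xs, ys = np.meshgrid(np.arange(side), np.arange(side))
    coord = xs if vertical else ys
    return np.sin(2 * np.pi * coord / side + phase).ravel()

for step in range(10_000):
    # Random orientation and (this is where my experiment broke) random phase.
    x = grating(phase=rng.uniform(0, 2 * np.pi), vertical=rng.random() < 0.5)
    y = max(w @ x, 0.0)                 # rectified linear rate

    # BCM update: Hebbian when y > theta, anti-Hebbian when y < theta.
    w += eta * x * y * (y - theta)

    # Sliding threshold tracks the recent average of y^2 (homeostasis).
    theta += (y**2 - theta) / tau_theta
```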
From what I can tell, many papers address one or two of these but fail to capture everything. Maybe I'm being too greedy, but I feel like this list is a pretty sensible minimum for whatever learning algorithms are at play in the brain.
I am going to work on the project I outline here, but I would genuinely love to help you, even if it's just bouncing ideas off me. Be warned, I am also not formally trained in a lot of neuroscience, so take everything I say with a heap of salt.
Are you stepping away so as not to become dependent, to not lose some part of the human experience no matter how reliable these tools become, or simply for practical reasons?
Also, do you find LLMs to be an effective tool in interpersonal relationships? You call them a cheat code, and that sounds much better than my personal experience so far, although I have mostly been using them for debugging code rather than social situations.