Synthetic Neuroscience

Abstract: Neuroscience as engineering—understanding how the brain works by building simplified, effective models of it, while requiring the model to behave as a real animal would.

Motivation

Suppose we want to build an AGI. The only working human-level general intelligence we know of is the human brain. So if the goal is to guarantee the creation of a successful AGI, then we should study the human brain.

For ethical and practical reasons we do not want to copy the brain exactly. Rather, the goal is to understand the brain well enough that we can construct an intelligence with analogous capabilities. This differs from computational neuroscience, where simulations serve to better understand biological systems, especially to answer isolated, narrowly scoped research questions, and from neuromorphic computing, where performance supersedes biological accuracy.

A full brain simulation, down to the molecular level, would guarantee success but is computationally infeasible. And even then the brain would remain a black box to us, so this is undesirable anyway. Instead we look for effective models that at least exhibit the same qualitative behavior as the brain. For example, we may limit ourselves to modelling at the spike train level and ignore details such as gene upregulation. (The level of detail needed is still unknown.) To reproduce the same qualitative behavior, a simulated mouse brain should be capable of the same things a real mouse can do: not just motor skills and vision in isolation, but complicated tasks like foraging, hunting, navigation, and learning.
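To make "the spike train level" concrete, here is a minimal leaky integrate-and-fire neuron in Python, one of the simplest effective models at this level of description. The parameter values are illustrative placeholders, not claims about any particular biological neuron:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates input current and emits a spike when it crosses threshold.
# All parameter values are illustrative placeholders.
def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_thresh=-0.050, r_m=1e7):
    v = v_rest
    spikes = []  # spike times: the "spike train" is all we keep
    for step, i_in in enumerate(current):
        # Leaky integration: decay toward rest, driven by input current.
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset  # reset after spiking; finer detail is ignored
    return spikes

# A constant 2 nA input over 100 ms yields a regular spike train.
print(simulate_lif(np.full(1000, 2e-9)))
```

Everything below the spike train (ion channels, gene expression, and so on) is deliberately absorbed into a handful of effective parameters.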

Outline

Our research program is as follows. Much like synthetic biology, we shall follow the philosophy, "What I cannot create, I do not understand." At each stage, we evaluate our effective model by measuring its performance in some environment and comparing it with the performance of a test animal in the same environment. Eg we can compare the amount of time needed for an animal to learn to walk, its behavior when placed in a new location, etc. If our effective model performs significantly worse, then we must have missed an important element when simplifying the true biological model, and we must go back, identify it, and incorporate it. In a sense we, the experimenters, are performing artificial selection to search for the model architecture. Instead of optimizing for some kind of fitness as in natural evolution, we specify a list of tasks which our model must perform, as sketched below.
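A minimal sketch of this selection loop, assuming a hypothetical task battery; the task names, baseline scores, and evaluation function are all placeholders, not real measurements:

```python
import random

# Hypothetical task battery: each task scores a model (or animal) in an
# environment, eg inverse of the time needed to learn to walk.
TASKS = ["learn_to_walk", "navigate_new_location", "forage"]

# Reference performance measured on the real animal (placeholder values).
ANIMAL_BASELINE = {"learn_to_walk": 0.9, "navigate_new_location": 0.8, "forage": 0.7}

def evaluate(model, task):
    # Placeholder: run `model` in the environment for `task`, return a score.
    return random.random()

def passes_battery(model, tolerance=0.1):
    # Accept only if the model is within tolerance of the animal on EVERY task.
    return all(evaluate(model, t) >= ANIMAL_BASELINE[t] - tolerance for t in TASKS)

def artificial_selection(candidate_architectures):
    for model in candidate_architectures:
        if passes_battery(model):
            return model  # understood well enough, for engineering purposes
        # Otherwise a simplification dropped something important: go back,
        # identify the missing element, and build it into the next candidate.
    return None
```

The point is the control flow, not the scores: any failure on the battery sends us back to the biology to find what our simplification discarded.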

If successful, we then demand more sophisticated behavior from the model, moving on to a more complex model organism. Eg when we move from a mouse model to a chimpanzee model, we add the requirement of tool use.

Note that the requirements themselves may also be analogous instead of exact. Eg we may choose to simplify or alter the anatomy of the simulated organism, or to simplify the hunting process.

Advantages of this approach

Biological intelligences significantly outperform traditional machine learning in out-of-distribution performance, few-shot learning, robustness against hallucination, resistance to catastrophic forgetting, etc. A particularly interesting research question would be to study the role of instincts and reflexes, which are built into the animal brain rather than learned. Introducing such inductive biases may be essential for increasing sample efficiency when learning motor tasks. Another research question is to understand how biological intelligence performs executive functions. The brain has memory and a persistent world model, so it normally does not hallucinate the way an LLM does. The brain may also possess a special architecture that grants it superior generalization capabilities.
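As one illustration, a built-in reflex could be operationalized as a fixed controller with learning confined to a residual correction on top of it, in the spirit of residual policy learning. The sketch below is a hypothetical construction, not a claim about biological circuitry:

```python
import numpy as np

# Instinct as inductive bias: a fixed, hand-wired reflex controller plus a
# learned residual. All dimensions and weights here are illustrative.
rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 8, 2

# Built-in reflex: a fixed linear map from observations to actions,
# wired at "birth" and never updated by learning.
R = rng.normal(size=(ACT_DIM, OBS_DIM)) * 0.1

class ResidualPolicy:
    def __init__(self):
        self.w = np.zeros((ACT_DIM, OBS_DIM))  # learned residual, starts at zero

    def act(self, obs):
        # Total action = innate reflex + learned residual. Early in training
        # the residual is ~0, so behavior defaults to the reflex; the learner
        # only has to improve on it, not discover motor control from scratch.
        return R @ obs + self.w @ obs

policy = ResidualPolicy()
print(policy.act(rng.normal(size=OBS_DIM)))  # initially just the reflex output
```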

Guaranteed progress makes this approach alluring, unlike other routes to AGI such as scaling LLMs or introducing ad hoc changes to the architecture. And since we're always testing our effective model on real-world tasks, we are not stuck in the intellectual quagmire of understanding the brain completely; we only need to understand the important aspects. These tests evaluate our level of understanding, at least as far as engineering intelligent systems is concerned. (Eg, as a completely made-up example, our approach may tell us nothing about how bipolar neurons behave in real life, only that 30% of our neurons must be bipolar to give the right connectivity for a functioning neural network. The first question falls under traditional neuroscience.)

Because we scale up intelligence gradually, we obtain viable, robust general intelligence suitable for applications even before reaching the human level. A dog-level intelligence can navigate complex terrain, infer human intentions, avoid danger, manipulate objects, etc., although modifications are likely needed before deployment to ensure compliance or to enhance capabilities.

Remarks

This would be a long-term, resource-intensive research program. I am currently going through the literature to see whether this vision is already being pursued, eg by projects such as Nengo. I do not have formal training in neuroscience, so feedback is appreciated.

Edits

Current state of C. elegans whole brain emulation