[Linkpost] The neuroconnectionist research programme
This is a linkpost for https://www.nature.com/articles/s41583-023-00705-w (open access preprint: https://arxiv.org/abs/2209.03718)
Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call ‘neuroconnectionism’. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
Personally, I’d be excited to see more people thinking about the intersection of neuroconnectionism and alignment (currently, this seems very neglected).
Some potential areas of investigation (alignment-relevant human capacities), which also seem very neglected, include: instruction following, moral reasoning, and moral emotions (e.g. compassion, empathy).
- What new psychology research could best promote AI safety & alignment research? (EA Forum, 13 Jul 2023 16:30 UTC; 29 points)
- Inducing human-like biases in moral reasoning LMs (20 Feb 2024 16:28 UTC; 23 points)
- Shouldn’t we ‘Just’ Superimitate Low-Res Uploads? (3 Nov 2023 7:42 UTC; 15 points)
- Comment on The Plan − 2023 Version (30 Dec 2023 8:54 UTC; 4 points)
- Comment on [Linkpost] Large Language Models Converge on Brain-Like Word Representations (12 Jun 2023 22:24 UTC; 1 point)
I’m working on a lecture about brain-like AGI, with an emphasis on connectomics, intended for the online AI alignment conference in early August.