No software/hardware separation in the brain: empirical evidence
I feel like the evidence in this section isn’t strong enough to support the conclusion. Neuroscience is like nutrition—no one agrees on anything, and you can find real people with real degrees and reputations supporting just about any view. Especially if it’s something as non-committal as “this mechanism could maybe matter”. Does that really invalidate the neuron doctrine? Maybe if you don’t simulate ATP, the only thing that changes is that you have gotten rid of an error source. Maybe it changes some isolated neuron firings, but the brain has enough redundancy that it basically computes the same functions.
Or even if it does have a desirable computational function, maybe it’s easy to substitute with some additional code.
I feel like the required standard of evidence is to demonstrate that there’s a mechanism-not-captured-by-the-neuron-doctrine that plays a major computational role, not just any computational role. (Aren’t most people talking about neuroscience still basically assuming that this is not the case?)
We can expect natural selection to result in a web of contingencies between different levels of abstraction.[6]
Mhh yeah I think the plausibility argument has some merit.
Especially if it’s something as non-committal as “this mechanism could maybe matter”. Does that really invalidate the neuron doctrine?
I agree each of the “mechanisms that maybe matter” is tenuous by itself; the argument I’m trying to make here is hits-based. There are so many mechanisms that maybe matter that the chances of one of them mattering in a relevant way are quite high.
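To put rough numbers on the hits-based point (a minimal sketch; the mechanism count and per-mechanism probabilities below are made up for illustration, not estimates from the literature):

```python
# Illustrative only: if each of n candidate mechanisms matters with
# independent probability p, then P(at least one matters) = 1 - (1 - p)^n.
def p_at_least_one(n: int, p: float) -> float:
    return 1 - (1 - p) ** n

for n, p in [(10, 0.05), (20, 0.05), (20, 0.10)]:
    print(f"n={n}, p={p}: P(at least one matters) = {p_at_least_one(n, p):.2f}")
# n=10, p=0.05 -> 0.40;  n=20, p=0.05 -> 0.64;  n=20, p=0.10 -> 0.88
```

Even at 5–10% per mechanism, a couple dozen candidates push the aggregate probability well past a coin flip, assuming independence (which is itself a strong assumption).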
Sure, if you look at them in the abstract.

Having considered lots of proposed mechanisms-that-matter over the course of a 23-year career in computational neuroscience, I still largely believe the neuron doctrine.
The amount of information in neurons and the rate at which it changes seem quite adequate to explain the information density and rate-of-change of conscious experience.
A rough simulation of brain function wouldn’t precisely reproduce our conscious experience, but I see no reason to believe it wouldn’t produce something very much like an average human experience.
Assuming that every last molecule is functionally important (rather than very aggregated effects) seems so unlikely as to be irrelevant.
Neuronal information transfer supports consciousness. Other mechanisms facilitate and regulate neuronal information transfer. Some transfer relatively small amounts of information.
All of this can be simulated in the types of computers we’d have by around 2050, even if humans are still the only ones making better computers—which now seems unlikely.
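As a back-of-envelope check on the 2050 claim (my sketch, not anything established in this thread; every number is a rough order-of-magnitude assumption):

```python
# All figures are order-of-magnitude assumptions, not measurements.
neurons = 8.6e10              # ~86 billion neurons (for context; synapses dominate cost)
synapses = 1e14               # rough synapse count
rate_hz = 1.0                 # assumed mean firing rate (real rates span ~0.1-10 Hz)
flop_per_synaptic_event = 10  # assumed cost of updating one synapse per spike

flops = synapses * rate_hz * flop_per_synaptic_event
print(f"~{flops:.0e} FLOP/s")  # ~1e+15 FLOP/s under these assumptions
```

Current exascale machines already deliver on the order of 1e18 FLOP/s, so even a model a thousand times more detailed than this estimate stays within reach, consistent with the 2050 claim.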
Could you recommend any good (up-to-date) reading defending the neuron doctrine?

No. I’m not sure anyone has bothered to write one. There were only occasional halfhearted and poorly supported attacks on the neuron doctrine. The neuron doctrine is just not really debated because it’s almost universally accepted. No reputable neuroscientist argued against it to any strong degree, just for additional supportive methods of information transmission.
The closest reasonable question about it was active spikes at dendritic junctions. These are probably important, but they are akin to adding some extra layers of simple neurons to the network. They’re using the same basic ionic gates to send signals. There’s no reason to think those need to be modeled in anything like molecular detail; their function is well understood at a macro level.
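A toy sketch of that “extra layers” reading, in the spirit of two-layer dendritic-subunit models (all names and sizes here are mine, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

n_inputs, n_branches = 100, 10
x = rng.random(n_inputs)                     # presynaptic activity
w = rng.normal(size=(n_branches, n_inputs))  # synapses grouped by dendritic branch

# Classic point-neuron view: one weighted sum, one output nonlinearity.
point_neuron = sigmoid(w.sum(axis=0) @ x)

# Active-dendrite view: each branch applies its own nonlinearity first
# (the "dendritic spike"), i.e. one hidden layer of simple units,
# and the soma then sums the branch outputs.
branch_out = sigmoid(w @ x)
two_layer_neuron = sigmoid(branch_out.sum())

print(point_neuron, two_layer_neuron)
```

The point is structural: the dendritic nonlinearity adds a layer of the same kind of unit, not a new physical mechanism that would need molecular-level modeling.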
No reputable neuroscientist argued against it to any strong degree, just for additional supportive methods of information transmission.
I don’t think this is correct. This paper argues explicitly against the neuron doctrine (enough so that they’ve put it into the first two sentences of the abstract), is published in a prestigious journal, has far above average citation count, and as far as I can see, is written by several authors who are considered perfectly fine/serious academics. Not any huge names, but I think enough to clear the “reputable” bar.
I don’t think this is very strong evidence, since I think you can find people with real degrees supporting all sorts of contradicting views. So I don’t think it really presents an issue for your position, just for how you’ve phrased it here.
Neural field theory is different than the neuron doctrine. It accepts the neuron doctrine.
That abstract does not seem to be questioning the neuron doctrine but a particular way of thinking about neuronal populations. It is not proposing that we need to think about something other than neuronal axons and dendrites passing information, but rather about how to think about population dynamics.
So this is the opposite of proposing that a more detailed model of brain function is necessary; it’s proposing a coarser-grained approximation.
And they’re not addressing what it would take to perfectly understand or reproduce brain dynamics, just a way to approximately understand them.
It is not proposing that we need to think about something other than neuronal axons and dendrites passing information, but rather about how to think about population dynamics.
Really? Isn’t the shape of the brain something other than axons and dendrites?
The model used in the paper doesn’t take any information about neurons into account; it’s just based on a mesh of the geometry of the particular brain region.
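For concreteness, here is a toy version of that geometry-only idea as I understand it, with a 1-D chain standing in for the cortical mesh (this is illustrative, not the paper’s actual pipeline):

```python
import numpy as np

n = 200
# Discrete 1-D Laplacian as a stand-in for the mesh's shape operator.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
evals, modes = np.linalg.eigh(L)  # "geometric eigenmodes" of the shape

# A fake smooth activity map plus noise, playing the role of fMRI data.
rng = np.random.default_rng(1)
activity = np.sin(np.linspace(0, 6 * np.pi, n)) + 0.1 * rng.normal(size=n)

k = 20                              # keep only the lowest-frequency modes
coeffs = modes[:, :k].T @ activity  # project the map onto those modes
recon = modes[:, :k] @ coeffs
ve = 1 - np.var(activity - recon) / np.var(activity)
print(f"variance explained by {k} geometric modes: {ve:.2f}")
```

Nothing in this fit knows anything about neurons; all the basis functions come from the shape alone, which is the sense in which the model is geometry-only.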
So this is the opposite of proposing that a more detailed model of brain function is necessary; it’s proposing a coarser-grained approximation.
And they’re not addressing what it would take to perfectly understand or reproduce brain dynamics, just a way to approximately understand them.
The results (at least the flagship result) are about a coarse approximation, but the claim that anatomy restricts function still seems to me to contradict the neuron doctrine.
Admittedly the neuron doctrine isn’t well-defined, and there are interpretations where there’s no contradiction. But shape in particular is a property that can’t be emulated by digital computers, so it’s a contradiction as far as the OP goes (if in fact the paper is onto something).
Shape can most certainly be emulated by a digital computer. The theory in the paper you linked would make a brain simulation easier, not harder, and the authors would agree with that (while saying their theory is miles off from a proposal to emulate the brain in depth).
And the paper very likely is on to something, but not quite what they’re talking about. fMRI analyses are notoriously noisy and speculative. Nobody talking about brain emulation talks about fMRI; it’s just too broad-scale to be helpful.
Shape can most certainly be emulated by a digital computer. The theory in the paper you linked would make a brain simulation easier, not harder, and the authors would agree with that
Would you bet on this claim? We could probably email James Pang to resolve a bet. (Edit: I put about 30% on Pang saying that it makes simulation easier, but not necessarily 70% on him saying it makes simulation harder, so I’d primarily be interested in a bet if “no idea” also counts as a win for me.)
I think your argument also has to establish that the cost of simulating any mechanisms that happen to matter is quite high.
My intuition is that capturing enough secondary mechanisms, in sufficient-but-abstracted detail that the simulated brain is behaviorally normal (e.g. a sim of me not-more-different than a very sleep-deprived me), is likely to be both feasible by your definition and sufficient for consciousness.
If I understand your point correctly, that’s what I try to establish here:
the speed of propagation of ATP molecules (for example) is sensitive to a web of more physical factors like electromagnetic fields, ion channels, thermal fluctuations, etc. If we ignore all these contingencies, we lose causal closure again. If we include them, our mental software becomes even more complicated.
i.e., the cost becomes high because you need to keep including more and more elements of the dynamics.
Only if all of those complex interactions are necessary to capture consciousness at all, not just to precisely reproduce the dynamics of one particular consciousness.
If that were the case, the brain would be a highly unreliable mechanism. We’d lose consciousness when exposed to an external magnetic field, for instance, or when our electrolyte balance was off.
Most of the brain’s complexity contributes to maintaining information flow between neurons. The remainder is relatively cheap to simulate. See my other comment.