Memory in the microtubules
A recent article in PLoS Computational Biology suggests that memory is encoded in the microtubules. “Signaling and encoding in MTs and other cytoskeletal structures offer rapid, robust solid-state information processing which may reflect a general code for MT-based memory and information processing within neurons and other eukaryotic cells.”
They argue that synaptic connections are transient compared with the lifetime of memories, and that memories therefore cannot be stored in the synapses themselves but must reside in some more persistent structure. The structure they suggest is the phosphorylation state of sites on microtubule lattices within neurons. And that’s about as much of the technical detail as I feel able to summarise. It’s not all speculation; they report technical work on the structures of these cellular components. Total memory capacity would be somewhere upwards of 10^20 bits (or in more everyday units, 10 million terabytes), depending on the encoding scheme, of which they suggest several.
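A quick sanity check on the units, using my own arithmetic rather than anything taken from the paper:

```python
# Back-of-envelope conversion of the quoted capacity figure (~10^20 bits)
# into everyday units. Purely my own arithmetic, not the paper's.
bits = 1e20
total_bytes = bits / 8            # ~1.25e19 bytes
terabytes = total_bytes / 1e12    # 1 TB = 10^12 bytes
print(f"{terabytes:.2e} TB")      # ~1.25e7 TB, i.e. roughly 10 million terabytes
```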
Journalistic writeup here.
Note that Stuart Hameroff, one of the authors, is known for his proposals for microtubules as the mechanism of consciousness through quantum effects (and with Penrose, quantum gravitational effects). The present paper, however, is solely about memory and does not touch on quantum coherence or consciousness.
Can anyone give an informed opinion on what this means for cryonics, if true? A common-sense guess is that it would make cryonics less likely to work, since preservation of the information would depend on finer structures; but the magnitude of that effect depends on just how vulnerable ‘the phosphorylation state of sites on microtubule lattices’ is to vitrification or freezing damage. Is there a neuroscientist in the house?
You forgot to add the key phrase, “given the way cryonics organizations currently perform their suspensions.” Every time you see some assertion about cryonics’ alleged unworkability, you need to add that disclaimer. Otherwise you irrationally shut off the possibility of other approaches to the problems the cryonics project attempts to solve.
In the Hebbian model of learning, you explain the formation of associations (e.g. between “the house on the corner” and “annoying barking dog”) by saying that the dispositions of individual neurons change. One set of neurons represents the house on the corner, another set of neurons represents the annoying barking dog, and after having both sets of neurons activated together often enough, activating just the house-on-the-corner neurons will all by itself activate the barking-dog neurons as well. So as you approach the house, you remember the dog, and cross the road rather than have it bark at you.
Before you acquire this association, the barking-dog neurons fire (the “barking-dog representation” is activated) only in response to directly seeing and hearing the barking dog. We may say that this is a genetically hardwired tendency—the nervous system develops even before birth so that those sensory pathways exist. But for the barking-dog neurons to fire just because the house-on-the-corner neurons are firing, they have to acquire new dispositions. They must have a tendency to “notice” which other neurons fire when they are firing, and then to start firing when those peer neurons fire in future, even in the absence of direct sensory stimulation.
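To make that concrete, here is a minimal toy sketch of Hebbian association (my own illustration with made-up numbers; nothing in it comes from the paper): two small groups of rate-coded units, a simple Hebbian weight update during co-activation, and then a test of whether house input alone can drive the dog units.

```python
import numpy as np

# Toy Hebbian association: "house" units and "dog" units, initially unconnected.
# Repeated co-activation strengthens the house->dog weights until house input
# alone drives the dog units above threshold. Illustrative numbers only.
n_house, n_dog = 5, 5
W = np.zeros((n_dog, n_house))   # weights from house units onto dog units
eta = 0.1                        # learning rate
threshold = 1.0

house = np.ones(n_house)         # activity while looking at the house
dog_sensory = np.ones(n_dog)     # activity while seeing/hearing the dog

# Phase 1: house and dog experienced together; Hebbian rule dW = eta * post * pre
for _ in range(10):
    dog_post = dog_sensory       # dog units driven directly by the senses
    W += eta * np.outer(dog_post, house)

# Phase 2: house alone. The learned weights now activate the dog units.
dog_input = W @ house
print("input to dog units from house alone:", dog_input)
print("barking-dog representation activated?", bool(np.all(dog_input > threshold)))
```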
If something like this change in a neuron’s dispositions happens, it’s called “long-term potentiation” (LTP). The co-stimulation of house neurons and dog neurons (their repeatedly firing at the same time) physically changes the dog neurons so that they are sensitized to the firing of the house neurons. We know that this sort of potentiation occurs, and we know that it is the right sort of mechanism to make Hebbian learning possible. We even know that the enzyme CaMKII, mentioned in the paper, has something to do with LTP—if the enzyme is disabled, LTP is impaired. But we don’t know exactly what change in the vicinity of the synapse is responsible for the change in its behavior. It might be that CaMKII itself sticks around and constitutes the persistent change, or it might be that CaMKII causes a change in something else. So the hypothesis in this paper takes the second route: CaMKII changes the structure or state of dendritic microtubules, and that’s the long-term change which makes the synapse respond differently.
I can’t judge the plausibility of this as a proposition about causal interactions between molecules in the cell. The detailed physical trajectories of molecules within cells—e.g. the exact circumstances under which they encounter each other—are still largely unknown. Biologists test ideas like this, not by running detailed mechanistic simulations, but by trying to interfere with critical steps in the proposed chain of causation, which may be understood only qualitatively. Thus you might try to mutate the CaMKII gene so as to impair the alleged interaction with the microtubule, but without impairing its other functions. Or you might try to make some other enzyme which interacts with the microtubule in the same way, to see if you could get this other molecule performing the same alleged function.
What I will say: I think they go way overboard in talking about the microtubule as if it could have a kilobyte memory capacity. The microtubule is a polymer with thousands of subunits. An individual CaMKII molecule might interact with a local neighborhood of these subunits—a hexagonal cluster of them. But that doesn’t mean that each hexagonal neighborhood on the microtubule is an independently addressable memory! The chain of causation is: neurotransmitter arrives at the synapse, receptors open gates, Ca2+ enters the postsynaptic cell, CaMKII is activated, and then it goes somewhere and does something. Maybe it does go and interact with a microtubule. But even supposing that it produced a persistent local change in the microtubule, how could that change of state possibly have a similarly fine-grained effect on the future behavior of the whole synapse?
So if this idea has anything to it at all, I think one has to imagine it working in bulk at the molecular level: large numbers of CaMKII molecules arriving at numerous locations on the surfaces of adjacent microtubules, and producing an averaged change—in the percentage of surface sites that have been phosphorylated, or in the electronic band structure of the microtubule—some sort of global property of the individual microtubule that has a chance of reacting back on the global tendencies of the synapse as a whole. I’m sure the authors are open to the possibility that it works this way; so what I’m saying is that their diagrams of “logic gates” at individual interaction sites are almost certainly bogus.
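To illustrate what I mean by “working in bulk” (my own toy example, not the authors’ model): suppose each kinase event flips one randomly chosen site on a lattice, but the only quantity the synapse can ever read back is an aggregate such as the fraction of phosphorylated sites. Then the identities of the individual sites carry no retrievable information; only the bulk statistic does.

```python
import numpy as np

# Toy contrast between per-site "logic gate" storage and bulk storage.
# Many kinase events each phosphorylate a random lattice site, but the
# downstream readout only sees the overall phosphorylated fraction.
rng = np.random.default_rng(0)
n_sites = 10_000                           # lattice sites on one microtubule segment
lattice = np.zeros(n_sites, dtype=bool)

n_events = 3_000                           # independent phosphorylation events
hit_sites = rng.integers(0, n_sites, size=n_events)
lattice[hit_sites] = True                  # local, unaddressed modifications

bulk_readout = lattice.mean()              # the only quantity the synapse "sees"
print(f"fraction of sites phosphorylated: {bulk_readout:.3f}")
# Which particular sites were hit is invisible to this readout, so the usable
# capacity is that of one analogue number, not n_sites independent bits.
```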
The same goes for the headline claim to have discovered the molecular code of memory, comparable to DNA. That would be like being presented with a silicon chip, and claiming that you understood how it stored and retrieved data, because you had figured out the quantum mechanics of charge transfer. My point is that even if the depicted interactions are real, they will not each be individually conveying a bit of retrievable information, because there’s no reason at all to think that such local modifications, once made, are systematically re-addressable (in contrast to DNA sequence, where a change at a single site has definite consequences). If they are real, such individual interactions will be part of a bulk process, with (say) thousands of such interactions adding up to an overall state change that matters.
Now personally I am quite keen on the idea that “conscious bits of information”, at least, correspond to discrete changes in the ultimate quantum state of something. This is to avoid the dualism of coarse-grained computationalism, whereby we have, on the one hand, the exact microphysical details of what goes on, and then on the other hand, a finite state machine which physically is just a very low-resolution description of exact reality, but whose states are nonetheless supposed to be an exact description of conscious reality. But even though, in the absence of biophysical evidence, I have a strong bias in favor of that idea, I don’t see it working like this, because these interactions are still too localized.
My paradigm for a simple global quantum state would be a nonabelian phase factor. In a typical quantum state, you can multiply by a factor of exp(iθ) and it makes no internal difference. That’s an abelian phase factor, because such quantities form an abelian group. In topological quantum computation, you get more complicated global “phase factors” (matrices of complex numbers rather than single numbers) which encode topological information. So the overall wavefunction can be factorized into “global topological phase factor” × “entangled local variables”. If you could build the quantum mind out of tensored topological phase factors arising from the wavefunctions of individual microtubules, maybe you could avoid the dualism of coarse-graining—the phase factors don’t arise by coarse-graining, they are exactly algebraically separable from the local details.
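Written out schematically (my own notation; the last line in particular is hypothetical and is only meant to mirror the factorization described above):

```latex
% Abelian (ordinary) phase: an overall factor that changes nothing observable.
|\psi\rangle \;\longmapsto\; e^{i\theta}\,|\psi\rangle ,
\qquad
\langle\psi|\,e^{-i\theta}\hat{A}\,e^{i\theta}\,|\psi\rangle
   = \langle\psi|\hat{A}|\psi\rangle
   \quad \text{for every observable } \hat{A}.

% Non-abelian case: the "phase" is a unitary matrix acting on a degenerate
% subspace, and the order in which such factors are picked up matters.
|\psi\rangle \;\longmapsto\; U_{\gamma}\,|\psi\rangle ,
\qquad
U_{\gamma_1} U_{\gamma_2} \neq U_{\gamma_2} U_{\gamma_1} \ \text{in general}.

% Hypothetical factorization of the overall state, as described in the text:
|\Psi\rangle \;=\;
   \underbrace{U_{\text{topological}}}_{\text{global phase factor}}
   \;\times\;
   \underbrace{|\phi_{\text{local}}\rangle}_{\text{entangled local variables}} .
```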
That’s all completely speculative, but I’m just providing the details of why, even though I’m very interested in microtubule theories of mind, I wouldn’t believe in these alleged interactions as the finest grain of memory—conscious memory, at least: they’re local but not addressable, which means they can only be meaningful in bulk.
From a more conventional neuroscientific perspective, I would think that the hypothesis in this paper looks highly speculative; mildly interesting at best, possibly in contradiction with some technical fact, without much empirical motivation, and not very meaningful until it can be brought into contact with some model from cognitive molecular neuroscience and tested.
Any neuroscientists care to weigh in? I’m interested in neuroscience on an amateurish level, and my priors on this are minuscule. I’ll also admit to a certain moderately irrational bias against the quantum consciousness crowd. I have little respect for their objectivity and reason, since they appear, to me, to be pushing scientific ideas almost entirely because those ideas are comforting.
Also, the argument listed above doesn’t sit right with me. Synaptic connections are impermanent, but there’s strong clinical and experimental evidence that memories are explicitly rewritten into new synapses every time they are remembered, which provides an explanation for why they are so fragile and unreliable. I’m not at all sure that the problem they’re claiming to solve is actually a problem.
This paper consists of some vague simulations, followed by wild speculation. I’m pretty sure it’s bunk (speaking as a computational/cell biologist). It would also be pretty easy to test: if the authors are correct, disrupting microtubules AT ALL should completely destroy memories.
This paper is pure speculation aimed at supporting a conclusion that was reached long ago on the basis of less-than-trustworthy reasoning. Also, it is in a pay-to-publish journal with very weak peer-review standards, which does not do much for its credibility...
This paper is full of pretty pictures, but its thesis appears to be pretty nutty.
Here’s a report of a more sensible study on the molecular basis of human memory: Prion Leaves Lasting Mark On Memory.
Thanks for the heads up and the link :-)