Thermodynamics of Intelligence and Cognitive Enhancement
Introduction
Brain energy is often confused with motivation, but these are two distinct phenomena. Brain energy is the actual metabolic energy available to neurons, in the form of adenosine triphosphate (ATP) molecules. ATP is the “energy currency” of the cell, and is produced primarily by the oxidative metabolism of food. High motivation increases the use of this energy, but in the absence of sufficient metabolic capacity it eventually results in stress, depression, and burnout, as seen in manic depression. Most attempts at cognitive enhancement address only the motivation side of the equation.
The “smart drug” culture has generally been thinking pharmaceutically rather than biologically. Behind that pharmaceutical orientation there is sometimes the idea that the individual just isn’t trying hard enough, or doesn’t have quite the right genes to excel mentally.
-Ray Peat, PhD
Cellular Thermodynamics
Any simple major enhancement to human intelligence is a net evolutionary disadvantage.
-Eliezer Yudkowsky (Algernon’s Law)
I propose that this constraint is imposed by the energy cost of intelligence. The conventional textbook view of neurology suggests that much of the brain’s energy is “wasted” in overcoming the constant diffusion of ions across the membranes of neurons that aren’t actively in use. This is necessary to keep the neurons in a ‘ready state’ to fire when called upon.
Why haven’t we evolved some mechanism to control this massive waste of energy?
The Association-Induction hypothesis formulated by Gilbert Ling is an alternate view of cell function, which suggests a distinct functional role for energy within the cell. I won’t review it in detail here, but you can find an easy-to-understand and comprehensive introduction to this hypothesis in the book “Cells, Gels and the Engines of Life” by Gerald H. Pollack (amazon link). This idea has a long history and considerable experimental evidence, which is too extensive to review in this article.
The Association-Induction hypothesis states that ion exclusion in the cell is maintained by the structural ordering of water within the cytoplasm, by an interaction between the cytoskeletal proteins, water molecules, and ATP. Energy (in the form of ATP) is used to unfold proteins, presenting a regular pattern of surface charges to cell water. This orders the cell water into a ‘gel like’ phase which excludes specific ions, because their presence within the structure is energetically unfavorable. Other ions are selectively retained, because they are adsorbed to charged sites on protein surfaces. This structured state can be maintained with no additional energy. When a neuron fires, this organization collapses, which releases energy and performs work. The neuron uses significant energy only to restore this structured low entropy state, after the neuron fires.
This figure (borrowed from Gilbert Ling) summarizes this phenomenon, showing a folded protein (on the left) and an unfolded protein creating a low entropy gel (on the right).
To summarize, maintaining the low entropy living state in a non-firing neuron requires little energy. This implies that the brain may already be very efficient, where nearly all energy is used to function, grow, and adapt rather than pump the same ions ‘uphill’ over and over.
Cost of Intelligence
To quote Eliezer Yudkowsky again, “the evolutionary reasons for this are so obvious as to be worth belaboring.” Mammalian brains may already be nearly as efficient as their physics and structure allow, and any increase in intelligence comes with a corresponding increase in energy demand. Brain energy consumption appears correlated with intelligence across different mammals, and humans have unusually high energy requirements due to our intelligence and brain size.
Therefore if an organism is going to compete while having a greater intelligence, it must be in a situation where this extra intelligence offers a competitive advantage. Once intelligence is adequate to meet the demands of survival in a given environment, extra intelligence merely imposes unnecessary nutritional requirements.
These thermodynamic realities of intelligence lead to the following corollary to Algernon’s Law:
Any increase in intelligence implies a corresponding increase in brain energy consumption.
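To give this corollary a rough sense of scale, here is a back-of-envelope sketch of the existing brain energy budget. The numbers are common textbook-style figures I am assuming for illustration (roughly 120 g of glucose per day for the adult brain), not measurements from this article:

```python
# Back-of-envelope sketch of the adult human brain's energy budget.
# All constants are rough textbook-style assumptions, not data from this post.

glucose_g_per_day = 120.0       # approximate daily brain glucose consumption
kj_per_g_glucose = 15.6         # energy from complete oxidation of glucose

brain_kj_per_day = glucose_g_per_day * kj_per_g_glucose
brain_watts = brain_kj_per_day * 1000.0 / 86400.0   # average power in J/s

bmr_watts = 80.0                # rough adult basal metabolic rate (~1650 kcal/day)

print(f"Brain power: ~{brain_watts:.0f} W")                          # ~22 W
print(f"Share of basal metabolism: ~{brain_watts / bmr_watts:.0%}")  # ~1/4, vs ~2% of body mass
```

An organ that is about 2% of body mass drawing on the order of a fifth to a quarter of resting energy is exactly the kind of cost that evolution should be stingy about increasing.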
Potential Implications
What is called genius is the abundance of life and health.
-Henry David Thoreau
This idea can be applied to both evaluate nootropics, and to understand and treat cognitive problems. It’s unlikely that any drug will increase intelligence without adverse effects, unless it also acts to increase energy availability in the brain. From this perspective, we can categorically exclude any nootropic approaches which fail to increase oxidative metabolism in the brain.
This idea shifts the search for nootropics from neurotransmitter-like drugs that improve focus and motivation to those compounds which regulate and support oxidative metabolism, such as glucose, thyroid hormones, some steroid hormones, cholesterol, oxygen, carbon dioxide, and enzyme cofactors.
Why haven’t we already found that these substances increase intelligence?
Deficiencies in all of these substances do reduce intelligence. Further raising brain metabolism above normal healthy levels should be expected to be a complex problem because of the interrelation between the molecules required to support metabolism:
If you increase oxidative metabolism, the demand for all raw materials of metabolism is correspondingly increased. Any single deficiency poses a bottleneck, and may result in the opposite of the intended result.
So this suggests a ‘systems biology’ approach to cognitive enhancement. It’s necessary to consider how metabolism is regulated, and what substrates it requires. To raise intelligence in a safe and effective way, all of these substrates must have increased availability to the neuron, in appropriate ratios.
I am always leery of drawing analogies between brains and computers, but this approach to cognitive enhancement is very loosely analogous to over-clocking a CPU. Over-clocking requires raising both the clock rate and the energy availability (voltage). In the case of the brain, the effective ‘clock rate’ is controlled by hormones (primarily triiodothyronine, aka T3), and energy availability is provided by glucose and other nutrients.
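To make the analogy concrete: the standard first-order model of CPU dynamic power is P ≈ C·V²·f, and sustaining a higher clock generally requires a higher voltage, so power grows faster than the clock rate does. A minimal sketch with illustrative constants (the brain-side mapping remains, of course, only loose):

```python
# First-order CPU dynamic power model: P = C * V^2 * f.
# Constants are illustrative only; the point is the superlinear growth
# in power when clock rate and voltage must be raised together.

def dynamic_power_w(capacitance_f, voltage_v, freq_hz):
    return capacitance_f * voltage_v ** 2 * freq_hz

baseline = dynamic_power_w(1e-9, 1.0, 3.0e9)    # 1.0 V at 3.0 GHz
overclock = dynamic_power_w(1e-9, 1.2, 3.6e9)   # +20% voltage to sustain +20% clock

print(f"Clock: 1.20x, Power: {overclock / baseline:.2f}x")  # 1.2^2 * 1.2 ~= 1.73x
```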
It’s not clear if merely raising brain metabolism in this way will actually result in a corresponding increase in intelligence; however, I think it’s unlikely that the opposite is possible (increasing intelligence without raising brain metabolism).
I was going to comment upon the comment you made in another post about Gilbert Ling. I’d never heard of them or their ideas and had to go looking the other night. As a result I now know why I never did.
I am extremely tired right now and in the middle of preparing for my thesis committee meeting, so I cannot give this the attention it deserves right now. Come back in a day or three and I will either expand this reply or make another one.
For now:
*From what I’ve seen, Ling’s ideas seem to originally be based upon a few equivocal experiments from the sixties and seventies that have since been contradicted by just about all cellular electrophysiology, enzymology, and membrane biology known.
*All the odd results they point to about ion balance requiring too much energy to be accounted for via active-transport ion pumps in the membrane have been resolved to my satisfaction by more recent work, especially in neurons, and all their talk about cells with compromised membranes maintaining ion balance seems like it can be explained by work on calcium-triggered membrane vesicle fusion from the nineties.
*This proposed role for ATP in unwinding proteins is experimentally unsupported by any good sources I can find. I fail to understand why it would be better for this purpose than the plain phosphate lying around the cell, and I use ATP literally nearly every day to drive enzymes via hydrolysis. The ability of enzymes and channel proteins to couple its hydrolysis to all kinds of processes is not and never has been in doubt.
*Ion gradients established via membrane pumps are well-established as an energy source (or sink) in biology from the classic rotary ATPase that pumps protons across the bacterial membrane while breaking down ATP (with the proton gradient then driving flagellar motion in bacteria as it leaks back in through another rotary setup attached to the fiber and various transport systems), to the role of calcium as a signaling molecule that is sequestered out of the cytosol by active ion-specific pumps that can be poisoned, to the existence of channel proteins in our blood that form part of our innate immune system which exert their killing effect on bacteria by forming large nonspecific channels in their membranes that allow ions to leak across their gradient.
*Most of their writing that I did see in my hour of poking around contains incredulity about the very idea of powered channels being able to do what we know, experimentally, they do, along with some very elementary misunderstandings of molecular biology and evolution.
*The evidence for ordered water more extensive than hydration shells around proteins is not very good, and there is actually very interesting and productive research going on into protein hydration—I interviewed with, and almost worked for, a guy at Indiana University starting a project comparing the hydration shells around bacterial and eukaryotic proteins, and finding that bacterial proteins had nice orderly hydration shells you can see in crystal structures, while eukaryotic proteins are ‘sticky’ to other protein molecules because they are unable to hold onto water molecules well in the face of thermal noise. You pointed to the work of Gerald Pollack as supporting the idea of extensive ordered water, and while this is a guy doing interesting materials science about the interaction of water and solutes with surfaces, he is prone to making wild pronouncements on extremely small amounts of evidence, and others have raised other possible explanations for his data that have been insufficiently followed up on.
There is a place for controversy in biology. At the university I am working at right now, I nearly worked with a guy who is busily basically trying to prove that the model of protein translocation across the ER membrane is wrong and far more complicated than the usual address-tag plus protein-channel for growing peptides setup in the textbooks. He even thinks that parts of it were crystallized in the public opinion of scientists before sufficient work had gone into proving it. But at a glance shoving out literally most of what is known pretty well about the way cellular metabolism and membranes work… color me sceptical to say the least.
If you have time to provide links, I would like to read the research you mention, especially on the thermodynamics of ATP and calcium-triggered membrane vesicle fusion. Ling’s work is all very old and doesn’t address any newer research, but Pollack addresses some of the issues you raise in his books. Pollack does love to speculate, but he appears to be careful in distinguishing this speculation from things that have more evidence. Here is also a newer review paper that discusses this and some other ideas related to the role of entropy in biochemistry: Coherent Behavior and the Bound State of Water and K+ Imply Another Model of Bioenergetics: Negative Entropy Instead of High-energy Bonds
I’m gonna split up this reply, since I think part of it is important enough to be seen more and will go into a higher-level reply to the post itself. I will also preface this by saying that my primary areas of expertise are in energy metabolism (mostly glycolysis) and the cell division cycle, along with all the basic enzymology you need to know to do molecular biology.
As for ATP thermodynamics, I looked deeper into Ling’s writings before replying and was more and more distressed by what I saw. They literally cannot correctly do the thermodynamics and biochemistry that I learned in my senior year of high school. The end result of several extremely basic math and conceptual errors in their justification for their theories is that their calculated value for the free energy available from cellular metabolism to pump sodium and potassium across the membrane is approximately 1/12 the true value! Given that this is approximately the figure they give for the factor by which cellular metabolism supposedly falls short of providing enough energy to run the pumps (they give a factor of 15-30 [without stating the error bars]), and that I have other reasons to distrust almost everything this person has ever done, it is safe to say that Ling’s objection to the sodium/potassium pump on thermodynamic grounds is quite simply unjustified.
I will be walking through this in more detail in my other top-level reply along with other reasons that you shouldn’t accept their work at face value.
I will only say more about the thermodynamics of the sodium potassium pump by pointing to a paper I found after a few moments of Google Scholar searching, indicating that hepatocytes under oxygen-starvation conditions not dissimilar to those described in Ling’s experiments put about 75% of their cellular energy into maintaining the ion gradient, but that under normal circumstances they use a much more reasonable less than one quarter. This is not an unexplored area of research, and insinuating that there is some sort of controversy here is simply false.
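For anyone who wants to check the pump’s feasibility themselves, here is a minimal sketch of the standard calculation, using typical textbook ion concentrations and membrane potential (assumed values of my choosing, not numbers from Ling or from the hepatocyte paper). It compares the work of one pump cycle (3 Na+ out, 2 K+ in) against the free energy one ATP typically provides in vivo:

```python
import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant (J/mol/K), body temp (K), Faraday (C/mol)
Vm = -0.070                        # resting membrane potential, volts (inside negative)

# Approximate mammalian ion concentrations (mol/L); assumptions for illustration.
Na_out, Na_in = 145e-3, 12e-3
K_out, K_in = 4e-3, 140e-3

# Work per mole of ions against both the concentration gradient and the field.
w_na = R * T * math.log(Na_out / Na_in) + F * (0 - Vm)  # Na+ pushed out: uphill both ways
w_k = R * T * math.log(K_in / K_out) + F * Vm           # K+ pulled in: the field helps

cycle_kj = (3 * w_na + 2 * w_k) / 1000   # one ATP drives 3 Na+ out and 2 K+ in
atp_kj = 50.0                            # typical in-vivo |dG| of ATP hydrolysis (~50-60 kJ/mol)

print(f"Work per pump cycle: ~{cycle_kj:.0f} kJ/mol ATP")  # ~44 kJ/mol
print(f"Feasible on one ATP? {cycle_kj < atp_kj}")         # True
```

On these numbers, one ATP covers a full cycle with some room to spare, which is consistent with the above and hard to square with a claimed 15-30x shortfall.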
I will summarize the research on membrane formation and calcium-mediated membrane vesicle fusion before linking to a paper that you can see figures of without a paywall.
Membrane lipids and the contents of membrane-bound compartments move around between compartments via vesicles. Proteins are extruded into the interior of the endoplasmic reticulum compartment and then packaged through a series of other membrane-bound organelles before being secreted, and all the membrane-lipid-building enzymes are for the most part embedded in the ER membrane, so the lipids have to get from there to all the other membranes somehow. This turns out to be done via extremely tiny submicroscopic vesicles. They are rather smaller than the wavelength of light, because they are pinched off their parent membranes by molecular machines consisting of single-digit numbers of protein molecules, and thus are invisible to a light microscope, but you can see them with an electron microscope, and in some places, like the growing tips of fungal threads, they are densely packed enough that they make the cytoplasm milky.
These vesicles are attached to their destination membranes by a complex of proteins called SNAREs, which require calcium to work. v-SNAREs on the vesicles bind to t-SNAREs on the destination membranes, and as the complex forms they warp the membranes and cause them to fuse. I’ve seen very interesting molecular simulations from the Folding@Home project of this process, which if I recall correctly indicated that as the membranes get warped and pushed together by the binding of the SNAREs, the hydration shells of the hydrophilic head groups of the lipids clash and produce interesting ordered structures that suddenly exclude all solvent from between the two membranes, causing the membranes to rapidly fuse.
Calcium is ordinarily excluded from the cytoplasm almost entirely, being sequestered into the ER compartment and the extracellular fluid. As a result SNARE function is normally very slow, except near the surface of membranes that bear calcium channels. In neurons, calcium rushes in for a minuscule fraction of a second after the neuron fires, and this is what mediates the fusion of neurotransmitter-carrying vesicles to the presynaptic membrane, allowing the signal to be passed to another neuron.
Cells DO get rips and tears in their membranes but often manage to repair them before losing undue cell contents. The main research on how this happens was done in starfish and echinoderm egg cells and embryos because they are cheap ways of getting lots of cytoplasm. A representative paper can be seen here: “Large Plasma Membrane Disruptions Are Rapidly Resealed by Ca 2+ dependent Vesicle–Vesicle Fusion Events”.
Figure 1: something like an eighth of the membrane of a starfish egg is torn off, and while there is an initial puff of cytoplasm that squirts out, a new membrane forms behind it and retains the cell contents within seconds.
Figure 2: same thing but in an egg that had been injected with a substance that only glows in the presence of calcium ions. Upon membrane tearing the calcium RUSHES in rapidly, and the area of very high concentration is where the new membrane forms. The remaining calcium that makes it past the new barrier then diffuses throughout the rest of the cell rather than being excluded from some kind of water matrix.
Figure 5: injecting fluorescent dye without calcium into a starfish egg lets the dye immediately diffuse throughout the cell, while injecting it with calcium ions causes a vesicle to form around the dye containing it and preventing it from getting into the cytosol.
Figure 7: the cytoplasm around a vesicle formed like in figure 5 is full of membrane vesicles of odd shapes and sizes.
Figure 9: starfish egg cytoplasm dripped out of a needle into non-calcium-containing media loses a fluorescent dye in it to the media, while when dripped into calcium-containing media it forms a membrane and holds it in.
Figure 10: cytoplasm centrifuged so that it no longer contains membrane-bound vesicles is unable to form a barrier in response to calcium while the centrifuged down membranous organelles and vesicles are able to.
Figure 11: a diagram of the proposed mechanism.
It appears that when the horrifically abnormally high levels of calcium that appear when a membrane is cut hit the small membrane vesicles present in the cytoplasm, they rapidly indiscriminately fuse until they manage to create a new membrane barrier from themselves and any other membranes they touch that holds the cytoplasm in and restores ion sequestration. Other papers both before and after this saw the resealing of membranes in normal body cells but were unable to closely examine it, the large egg cells made it possible to do all these interesting manipulations.
This immediately suggests an explanation for Ling’s experiment in which they sliced frog muscle cells in half and put the cut ends in an ion solution (“Ringer’s solution”), measured the ion flux in and out, and saw that it was normal. I note that Ringer’s solution contains large amounts of calcium. They claim that they checked via electron microscopy that the cytoplasm did not reseal, but the insides of muscle fibers are horrifically dense, complicated places, and there’s no guarantee that it resealed RIGHT at the cut site – it could have resealed microns or even millimeters away.
Excellent post, thanks for putting so much work into a clear explanation. I will re-investigate Ling’s work more carefully, and also see if I can find the mistakes in his thermodynamics calculations you mention. I have been biased towards his work and not looking at it critically enough, because it seems to explain some surprising observations about drug activity I’ve found in my own research, but that’s no excuse.
I am interested in the possibility that Ling could be entirely wrong about membrane physiology, but this gel phase shift phenomenon could still be important in the cell. If Ling and Pollack are wrong about long-distance effects from protein surfaces, that might not destroy their arguments, as the cytosol is very dense and the distance between proteins is very short. Albert Szent-Györgyi also did some work on this idea that is very different from Ling’s.
One of my committee members works on physics simulations of protein hydration shells, and I am going to meet with him and see what he thinks about this. The simulations I have seen don’t show significant water structuring, as the water molecules have too much thermal energy.
I promise I’ll get my top level post made soon—I just finished my committee meeting a few hours ago.
The short and dirty version is that Ling seems to completely ignore the entropy contribution to the Gibbs free energy change associated with ATP hydrolysis, and throws out about 3/4 of the enthalpy contribution on the grounds that it is the energy of solvation of the protons that come off the newly deprotonated middle phosphate rather than the potential energy of the phosphate-phosphate bond itself. That simply doesn’t matter; you just can’t do that when considering equilibrium, reaction rates, and the ability of one reaction to drive another by coupling to it. It’s not as if that one bond alone charges up a battery or something; the whole reaction occurs.
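For concreteness, this is the textbook calculation being alluded to: the usable free energy comes from ΔG = ΔG°′ + RT·ln([ADP][Pi]/[ATP]) for the whole reaction, not from the enthalpy of one bond. A sketch with assumed (cell-type-dependent) cytosolic concentrations:

```python
import math

R, T = 8.314, 310.0        # gas constant (J/mol/K), body temperature (K)
dG0_kj = -30.5             # standard transformed dG of ATP hydrolysis, kJ/mol

# Assumed typical cytosolic concentrations (mol/L); illustrative only.
ATP, ADP, Pi = 5e-3, 0.5e-3, 5e-3

Q = (ADP * Pi) / ATP       # reaction quotient for ATP + H2O -> ADP + Pi
dG_kj = dG0_kj + (R * T * math.log(Q)) / 1000

print(f"dG in vivo: ~{dG_kj:.0f} kJ/mol")   # ~-50 kJ/mol, well beyond the standard value
```

The roughly 20 kJ/mol concentration term is exactly the kind of contribution that gets lost if you reason only about “the bond.”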
I honestly don’t know what to make of the assertion that ATP unwinds proteins just by complexing with unwound backbone. I’ve never seen that claim anywhere else, and I use ATP all the time via standard active-site hydrolysis reactions to drive DNA-building and DNA-modifying reactions as I make the DNA I put into my modified cells.
The simulation I was speaking of about the SNARES was indicating small hydration shells just a molecule or two wide, not large ones. It was interesting though in that it found that when the two membranes were forced into odd geometries and very close proximity by the SNARES forming a tight ring, the hydration shells were forced together to form ordered structures just a few molecules wide between the membranes before suddenly emptying the space. It’s been a while since I’ve seen that paper though, and I’d encourage you to look at the folding@home website and find it if you are curious and you don’t trust my memory (which I do not entirely trust myself, that’s not exactly my field and it’s been a few years).
Odd drug metabolism stuff eh? Want to move that to a PM?
I think this post illustrates the kind of thinking that made me hate molecular biology. I started studying bioinformatics with the plan of afterwards doing a master’s in neuroinformatics and working on cognitive enhancement. I used to believe.
You basically think that there is something called intelligence which has a clear definition that you don’t have to establish. Then you say that evolution tried to maximize that intelligence, and you think that the thing that constrains intelligence obviously falls in your domain of molecular biology and has to be ATP.
A while ago I tried to get a performance metric for my brain functioning and did it through a bunch of reaction tests. At the beginning I called it intelligence. I talked with a psychology PhD, and he told me that if I wanted to speak to an academic audience I shouldn’t use the word intelligence but rather speak about cognitive performance.
Ten million years ago our ancestors made decisions very differently than today’s humans do. Today’s humans use very different heuristics to make decisions, and there’s no good reason to assume that much evolutionary time was spent optimizing our brains to make decisions based on those heuristics.
Those heuristics allowed us to build tools, and then everything went very fast, and today we have brains that do a lot of amazing tasks for which they are not optimized. We outcompeted Neanderthals, who had bigger brains than us.
What practical evidence do I have that this is the case? Take memory. One example from my physiology lecture was someone who nearly gets hit by a car while traveling to work. That produces information storage. Even a year later, when the person has consciously forgotten the event, the amygdala will still fire when the person gets to the same physical location, and their pulse will rise.
On the other hand, if I hear another person’s name once, I will often have forgotten it a year later. Our brains are not optimized for storing that information. Wanting to store the information does nothing at all, but it would be very useful if we had something like a switch to mark information that should be remembered versus information that’s irrelevant.
In the physiology lecture we learned that the brain always uses the same amount of energy, regardless of whether we are “mentally active” or relaxed. On the other hand, the intestines use different amounts of energy depending on whether they are active and doing hard work or not. Really, everything except the brain uses a different amount of energy when it’s in hard-work mode.
The funny thing about using thermodynamics in the title of the post is that you just might be using the term entropy at the wrong level of abstraction. Having a constant, well-defined stream of information brings entropy into an informational system. Given that the brain regulates toward constant energy consumption, the constancy of the glucose input might be more important for intelligence than its quantity.
But let’s go back to hard mental work. At the beginning I couldn’t believe that the brain uses the same amount of energy. It’s a consensus belief if you look at biology stackexchange.
One of the core reasons why I didn’t believe it was mnemonics. Memorizing a deck of cards in half an hour was a very challenging mental activity for me. It made me breathe faster. It seemed obvious that my brain needed more energy for the hard mental work, and that was why I breathed faster. But my physiology professor wasn’t convinced. He told me that things aren’t as straightforward.
My problem was that doing mnemonics triggered a stress response. That stress response was completely unnecessary; the brain doesn’t need more energy. On the other hand, the stress does reduce cognitive function, and it made practicing mnemonics really uncomfortable. At the time I didn’t know enough about emotions to tackle the problem at its core, remove the unnecessary stress response, and focus on learning mnemonics. In the end I let it go, because memorizing decks of cards isn’t really a useful life skill anyway.
The same goes for other deliberate practice of mental skills. Humans can’t simply do deliberate practice for 8 hours per day, because a bunch of emotional crap triggers and costs resources.
Three million years ago, monkeys wouldn’t want to practice memorizing decks of cards for 3 hours in a row or proving math theorems for hours on end. Our emotional system is not made for that.
It’s rather made for preparing to run after the animal we are hunting when we use our mental prowess to read its tracks.
Humans are also doing other strange things when it comes to emotions. Nearly every animal has sex at the times of year when it makes sense to have sex: in the mating season. We humans don’t interact with our emotions that way. We simply have sex all the time, because having sex is fun. Just like our vitamin C production, something broke about making smart mating choices. Evolution is chaos. All that mental stuff distracts us from reacting to our emotions in a way that would make us do substantially different things in May than in September. The price of having heuristics that are useful for tool making was worth it.
Too many people are still Darwinists in the sense that they believe all evolution is about natural selection. They don’t understand the huge effects of genetic drift and the other ways the process breaks down and really isn’t optimized.
The pickup community, which thinks it’s natural for a male to always want sex, has got it very wrong. Most species don’t behave that way. It’s more of an accident. Things aren’t working as they should, because man suddenly started to use his brain differently than most other species.
If we want to succeed with cognitive enhancement, the solutions won’t be at the molecular biology level or seen in fMRI images. We need to understand a lot about how humans process information that’s not on those levels. The important bottlenecks are not on that level.
That’s a good point about intelligence, the way I used that word without defining it in this article is sloppy.
I am interested in the ability to solve important problems. Maybe instead I should talk about something more easily definable such as mental endurance, or limiting the stress response from focused work? Personally, I think if I could work longer in one sitting on a hard problem without stress or fatigue, that alone would count as “increased intelligence” for practical purposes.
I think there are links between the stress response and nutrient availability. In lab mice anyway, sugar seems to reduce stress hormone production during stressful situations. However in practice this might be harmful to doing focused work, if stress improves focus.
Thanks for pointing out the issue of brain energy consumption vs mental activity. I think this entire article hinges on the (unfounded?) assumption that the two are strongly correlated. I am confused about this, and need to learn more about it. I see many articles and researchers claim massive increases in energy consumption with hard mental activity, and others claim there is none, which seems very strange. How are they measuring this? I wonder if people under general anesthesia have much lower, or about the same, energy requirements as an awake person?
Do you have an argument for why we should have had evolutionary pressure for solving the kind of issues that we today consider to be important problems?
When it comes to a mental task such as memorizing a deck of cards, there are simply massive improvements when one uses mnemonics and trains compared to when one doesn’t. To me it doesn’t seem like there’s a good reason to think the same isn’t true of working on important problems.
I agree. The interesting thing about that question is that believing whether or not willpower is limited seems to have an effect: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0038680
By going around arguing that it’s a fixed resource, you might effectively reduce willpower. The quest for a molecular-biological framing of the problem might be hurting people’s ability to solve important problems, because it gives them the wrong beliefs, and those beliefs matter for their performance.
Maybe the emperor has no clothes and we should just stop the project and instead focus on programming the right beliefs into people. I know I’m moving into dangerous waters if I say those things on LessWrong ;).
The more I think about concepts like stress, the more new questions pop up. There are things happening in my body for which I developed qualia through Danis Bois’ perceptive pedagogy, where unfortunately the main body of written work is in French.
I have seen that there are interesting things to be done with hypnosis when it comes to emotional management but the resulting literature is also not straightforward.
I probably need good test subjects and more time thinking about details and improving my own perception. Maybe find a way to calibrate my perception.
Are we talking about production in the sense of making new hormones or are we talking about secreting already existing hormones? Do you know the time frames?
Unfortunately I have to confess I don’t know. I think at the time I took the relevant lessons I was too shy to really press for evidence. It says something about biology education: there are so many claims about how things happen to be, but in textbooks and lectures there’s not much emphasis on how we know them.
The massive variation in human intelligence and the positive correlation between IQ and pretty much everything good imply that “Any simple major enhancement to human intelligence is a net evolutionary disadvantage” isn’t true.
Also, it’s possible that humans were quickly evolving towards being more intelligent when they got interrupted by the invention of civilization and there’s still more low-hanging fruit to be picked.
And the mechanism by which civilization interrupts the evolution of intelligence is?
By changing selection pressures and by not having been around for very long.
That sounds plausible. Of course it also sounds plausible as an explanation for rapidly increasing the evolution of intelligence.
This seems compatible with Stephen Hsu’s models.
But then if there is the “positive correlation between IQ and pretty much everything good”, you do start to wonder why there is “massive variation in human intelligence”...
Not to mention the inverse correlation between IQ and number of offspring, although humans tend to use forms of group selection, where a smarter minority whose innovations help the community as a whole appears to be more sustainable than a smarter majority.
wiki
Does it make sense to control for education and socioeconomic status when measuring the effect of intelligence?
Not in this case. This is a good example of how you can go wrong by overcontrolling (or maybe we should chalk this up as an example of how correlations!=causations?)
Suppose the causal model of Genes->Intelligence->Education->Less-Reproduction is true (and there are no other relationships). Then if we regress on Less-Reproduction and include Intelligence & Education as predictors, we discover that after controlling for Education, Intelligence adds no predictive value & explains no variance & is uncorrelated with Less-Reproduction. Sure, of course: all Intelligence is good for is predicting Education, but we already know each individual’s Education. This is an interesting and valid result worth further research in our hypothetical world.
Does this mean dysgenics will be false, since the coefficient of Intelligence is estimated at ~0 by our little regression formula? Nope! We can get dysgenics easily: people with high levels of Genes will cause high levels of Intelligence, which will cause high levels of Education, which will cause high levels of Less-Reproduction, which means that their genes will be selected against and the next generation will start with lower Genes. Even though it’s all Education’s fault for causing Less-Reproduction, it’s still going to hammer the Genes.
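A quick simulation makes this concrete (a sketch of the hypothetical chain model above, with made-up unit effect sizes):

```python
# Simulate Genes -> Intelligence -> Education -> Less-Reproduction and show that
# controlling for Education zeroes out Intelligence's coefficient, even though
# selection against Genes still occurs. Effect sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genes = rng.normal(size=n)
intelligence = genes + rng.normal(size=n)
education = intelligence + rng.normal(size=n)
reproduction = -education + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), intelligence])
X2 = np.column_stack([np.ones(n), intelligence, education])
b1 = np.linalg.lstsq(X1, reproduction, rcond=None)[0]
b2 = np.linalg.lstsq(X2, reproduction, rcond=None)[0]

print(f"Intelligence coef, alone: {b1[1]:.2f}")                  # ~ -1.0 (via Education)
print(f"Intelligence coef, controlling Education: {b2[1]:.2f}")  # ~  0.0
print(f"corr(Genes, Reproduction): {np.corrcoef(genes, reproduction)[0, 1]:.2f}")  # negative
```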
I don’t know if this sort of problem has a widely-known name (IlyaShpitser might know one); I’ve seen it described in some papers but without a specific term attached, for example, “Let’s Put Garbage-Can Regressions and Garbage-Can Probits Where They Belong”:
Epidemiology seems to call this “overadjustment”; for example, “Overadjustment Bias and Unnecessary Adjustment in Epidemiologic Studies”:
Hi, sorry I missed this post earlier. Yes, this is sometimes called overadjustment. Their definition of overadjustment is incomplete—they are missing the case where there is a variable associated with both exposure and outcome, is not an intermediate variable, but adjusting for it increases bias anyways. This case has a different name, M-bias, and occurs for instance in this graph:
A → Y ← H1 → M ← H2 → A
Say we do not observe H1, H2, and A is our exposure (treatment), Y is our outcome. The right thing to do here is to not adjust for M. It’s called “M-bias” because the part of this graph involving H variables kind of looks like an M, if you draw it using the standard convention of unobserved confounders on top.
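A small simulation of this graph (arbitrary unit effect sizes of my choosing, with the true A→Y effect fixed at 1.0) shows the bias appearing only when M is adjusted for:

```python
# Simulate the M-bias graph A -> Y <- H1 -> M <- H2 -> A, with H1, H2 unobserved.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
h1 = rng.normal(size=n)                 # unobserved
h2 = rng.normal(size=n)                 # unobserved
a = h2 + rng.normal(size=n)             # exposure A
m = h1 + h2 + rng.normal(size=n)        # the M we might mistakenly adjust for
y = 1.0 * a + h1 + rng.normal(size=n)   # true causal effect of A on Y is exactly 1.0

def coef_on_a(y, *predictors):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1]   # coefficient on A

print(f"A coef, unadjusted: {coef_on_a(y, a):.2f}")        # ~1.00: unconditioned, M blocks the path
print(f"A coef, adjusting for M: {coef_on_a(y, a, m):.2f}")  # ~0.80: collider opened, biased
```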
But there is a wider problem here than this, because sometimes what you are doing is ‘adjusting for confounders,’ but in reality you shouldn’t even be using the formula that adjusting for confounders gives you, but use another formula. This happens for example with longitudinal studies (with a non-genetic treatment that is vulnerable to confounders over time). In such studies you want to use something called the g-computation algorithm instead of adjusting for confounders.
I guess if I were to name the resulting bias, it would be “causal model misspecification bias.” That is, you are adjusting for confounders in a particular way because you think the true causal model is a certain way, but you are wrong about that—the model is actually different and the causal effect requires a different approach from what you are using.
I have a paper with Tyler Vanderweele and Jamie Robins that characterizes exactly what has to be true on the graph for adjustment to be valid for causal effects. So you will get bias from adjustment (for a particular set) if and only if the condition in the paper does not hold for your model.
Ah, I stand corrected, then. Unless intelligence correlates enough with education and socioeconomic status to make the above meaningless, but it’d be weird to control for it if such was the case.
Doesn’t that inverse correlation disappear when you control for schooling?
The correlation does not predict low variation in intelligence; it merely predicts that this variation should decrease over time.
IQ has been gaining a lot of importance in evolutionarily recent times, so that positive correlation (and the variation-decreasing trend it predicts) can only have been weaker than it is now.
How much, I could only guess. But intelligence is generally appreciated in potential mates, and that appreciation wouldn’t be there if smartness didn’t improve reproductive success, at least on average.
That depends. First, it depends on the sex (some males actually prefer dumb blondes) and second, females usually like successful males—while intelligence is clearly correlated with success, it’s just a correlate and a signal, not the terminal value itself.
Also, if the floor of survive-and-breed is lowered enough (as is the case in the developed countries), the evolutionary pressures to wash out low intelligence become much weaker.
P.S.
LOL. Here is a study the abstract of which says:
I’ve wondered whether successful “dumb blondes” are actually unintelligent, or pretending—beautiful blondes aren’t common, but they’re still up against a fair amount of competition.
Don’t think it matters much here—the point is that there are guys who prefer not-smart women. If some women feel it necessary to pretend to be stupid, that just reinforces the point.
It might hint that men not caring about women’s intelligence doesn’t have a dysgenic effect.
For having kids with?
I think they would formulate it as “for bearing them kids”. Boys, preferably.
You might be generalizing from one example.
See also the stereotypical jock/nerd dichotomy.
I’m not generalizing from one example. Intelligence is correlated with sense of humor, which is one of the most consistently named traits that make a potential mate attractive, especially a male. source
Everyone I’ve ever heard talk about the jock/nerd thing was a US nerd.
It’s also correlated with success, as Lumifer mentioned elsethread. I thought we were talking about the direct effects of intelligence itself—though, as gwern points out, that doesn’t make too much sense.
IME similar things also happen in Italy (with some differences) and Ireland. I’d guess anywhere people are made to study things they never need outside school by bad teachers, they will resent that and dislike people who can learn such things effortlessly and enjoy it. (Of course I wouldn’t expect that to happen as much in places like say Finland or Singapore.)
There’s also the saying that “correlation does not imply causation”. The brain is very complex and energy intensive; basically anything that messes much with you is going to mess with your brain. For example, a near universal symptom of genetic diseases is reduced intelligence—and I’m going to bet that low intelligence is not the cause of the genetic problem.
Massive variation at a human scale may be microscopic on the scale of potential brain designs. Presumably, if the OP is correct, you would expect a massive difference in intelligence from even a minor improvement in energy usage.
Birds (especially corvids and parrots) seem to pack surprising amounts of intelligence into small brains. Could there be some efficiencies worth studying there?
“The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost”, Herculano-Houzel 2012 doesn’t seem to mention there being anything special about bird brains relative to primate and other brains in either neuronal density or energy efficiency (but I’m not sure to what extent they looked).
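For scale, one concrete figure from that line of work (as I recall it; worth checking against the paper) is that brain metabolic cost scales roughly linearly with neuron count, at something like 6 kcal per billion neurons per day. A sketch, assuming the same per-neuron cost across taxa, which is precisely the open question for birds:

```python
# Rough linear scaling of brain metabolic cost with neuron count, per
# Herculano-Houzel's line of work. The ~6 kcal/billion-neurons/day constant
# and the neuron counts below are assumptions to double-check, not post data.

KCAL_PER_BILLION_NEURONS_PER_DAY = 6.0

def brain_kcal_per_day(neurons_billion):
    return KCAL_PER_BILLION_NEURONS_PER_DAY * neurons_billion

for species, n in [("human", 86), ("chimpanzee", 28), ("raven (approx.)", 2.2)]:
    print(f"{species}: ~{brain_kcal_per_day(n):.0f} kcal/day")
```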
You should add a link to Gwern’s very related Algernon Argument.
My understanding was that when humans learned to cook, this essentially allowed us to outsource our digestion to fire, use fewer calories digesting, and thus dramatically increase the net number of calories we got from eating (or something like that), and that this energy jackpot played a big role in our evolution of larger brains.
Anyway, interesting post. RomeoStevens suggested consuming foods high in glucose the other day when I was complaining about being tired, and that actually seemed to work pretty well. So I’m eager to hear about research along these lines. Maybe someone could design a device that monitors your bloodstream for nutrients and injects new ones in order to optimize nutrient ratios or a new food that contains nutrients, hormones, etc. in exactly the ratio your brain needs them?
If you’re getting hypoglycemic, sugar will certainly help, and sugar highs are real. However, I don’t think this is a good idea long-term, as there is a considerable amount of evidence that eating a lot of refined carbs tends to lead to a variety of unpleasant diseases, most prominently the metabolic syndrome.
Perhaps sugar and glucose only cause problems in those who can’t metabolize it effectively, for other reasons (or when consumed in excess of your capacity to metabolize them). I used to think high glucose intake caused metabolic syndrome but I can’t reconcile that with the existence of large groups of people (Kitavans, fruitarianism, etc.) that have very high carb diets and don’t develop metabolic syndrome.
In mice, high sugar diets don’t cause metabolic problems or liver damage unless also coupled with high polyunsaturated fat intake. Populations of people with high carbohydrate diets and no metabolic syndrome seem to have very low polyunsaturated fat intake. Could carb restriction protect against the symptoms of metabolic disease, without addressing the underlying cause?
Fructose seems to increase T3 production in the liver, which could be a mechanism behind the sugar high. I am not sure if this is “good” or “bad.”
What do you mean by “metabolize effectively”? People who have problems with metabolizing glucose are usually known as “dead” and things like insulin resistance are a problem with signalling, not a problem with cells’ aerobic metabolism.
The metabolic syndrome is clearly multi-factor, there is no single cause we can point to. I am not sure PUFAs (or fructose) are the magic ingredient either, and it looks quite likely that the amount of physical exercise plays a fairly major role in all this—but the whole thing is pretty messy at the moment. Sorting out all the causal connections leading to the metabolic syndrome is probably Nobel material :-/
There can be damage or defects in the mitochondria that inhibit their ability to respond to hormones, for example peroxidation of cardiolipin. Cells don’t always die when they have a mitochondrial defect. An extreme example is cancer, where there is major damage to the mitochondria, but the cells continue to live via anaerobic fermentation (aka the Warburg effect).
Some review articles that talk about these theories:
Cancer as a metabolic disease: implications for novel therapeutics
Life and Death: Metabolic Rate, Membrane Composition, and Life Span of Animals
Role of cardiolipin peroxidation and Ca2+ in mitochondrial dysfunction and disease
Sure, but what’s the prevalence of this kind of problem? Sugar and/or glucose can cause problems in a significant chunk of the population, maybe even the majority. Are you saying all these people have damaged mitochondria?
I need to learn more about this, I don’t have a strong belief. If I understand correctly, this is basically the idea behind the free radical theory of aging (FRTA). One interesting variant of that idea is in the article I linked above, which suggests that the focus should be on “mitochondrial membrane peroxidizability index” rather than antioxidant activity or free radical production.
It seems weird that sugar seems to cause problems in certain populations of people, but not others.
Glucose != sugar. Romeo recommended dried apricots and carrots w/ hummus.
Plain sugar is sucrose, which consists of equal numbers of glucose and fructose molecules. It is absorbed from your gut as glucose and fructose. Starch, by the way, is basically a polymer of glucose (technically, a polysaccharide) and is absorbed from your gut mostly as glucose, though your gut flora is an active participant in the process.
Dried apricots are largely sugar (53% by weight is sugars), carrots are mostly water with some fiber and a bit of starch, not much, and with hummus it depends, but it’s a mix of carbs, proteins, and fat.
It doesn’t make sense to call things good or bad without the context of the person’s diet. Certainly the average person could stand to eat less sugar. Some people are on very low carb diets though and could stand to eat a bit more.
Although I don’t know the context of your original recommendation, using sugar rushes as pick-me-uppers really doesn’t sound like a good idea to me.
Snacking on fruits doesn’t induce sugar rushes unless you have a completely messed up insulin response.
Without going into personal recommendations, which should be personal—the rules for an 8%-body-fat marathoner are different from the rules for a couch potato who is a potato both in activity level and in shape—I still don’t think it’s a good idea to get into the habit of treating tiredness and low energy with sugary snacks.
It may have worked well in the preindustrial times when we had to run ten miles in the snow uphill both ways just to get to our shoes. But nowadays when there is a noticeable surplus of tasty sugar and a noticeable shortage of physical exertion, getting used to snacking on carbs for a boost of energy doesn’t sound wise to me.
I object to the framing. I suggested that someone who did not currently eat the recommended 5 servings of fruits to up their fruit intake. They did so and felt better. But we’ve already had this argument.
That was about fructose, this one is about glucose :-D
By the way, why do you consider the whole “recommended servings” things to be anything more than nonsense? As far as I know these recommendations are written by the agriculture and food industry lobby with the idea of shaping consumption preferences.
I only care about servings to the extent that they are used as the increments in studies on health. They are then used as recommendations because food packaging is sold in those units, making it easier on the end user. Most people don’t want to weigh their food.
Huh? Fruits and vegetables are sold in serving-size units..? What are you talking about?
Fruit comes in discrete units. It’s generally advisable to give people an idea what 5 servings per day looks like.
I agree if you’re thinking of apples or oranges, but ADBOC if you mean all fruit, including cherries or watermelons.
To Casiothesane and others, or anyone reading in the future: it’s probably bad form to comment on a 6-year-old post, and probably not fair to the opinions of those who have had six years to emend them; even if they still hold the same views, they may express them more persuasively now, or in ways I could not anticipate. That qualification given, I suggest people interested in this topic look into the work of Thoke, Olsen, Bagatolli, and colleagues, who have made progress on Ling’s AI theory in the last three years (Google those names together with “water”, “glycolysis”, “ACDAN”). You will find these issues are almost certainly more complicated than some of the comments skeptical of Ling propose. Also, I will just say, in too-broad strokes, that Ling exhaustively pores over the criticisms mentioned in these comments, for example regarding the “high energy” phosphate bond, in his books, which are for the most part available, detailing the experiments of himself and tens of other labs. They are not based upon one or two theoretical drawings or speculations; I could be wrong to assume they were taken like that.
I don’t expect any of this to be convincing, and hope this comment meets the guidelines. I made an account just to make this comment. My point is just that if you look into Ling’s evidence, which was not from just one or a few experiments, I believe you will see these issues are far more controversial than many here are letting on. Of course some of your experiments work, on a daily basis; what is at issue is why they work the way they do, and whether a different “why” would lead to new testable ideas in the field. I also made significant progress on Ling’s work in my forthcoming dissertation, and if anyone out there still finds this post live or interesting, I may introduce those data in a future comment.
I think you are right, but it isn’t as obvious as that. See Roy Baumeister and the thesis that willpower is due to blood glucose levels.
He said “distinct”, not “unrelated”.
Brains use less energy when they’re doing familiar skills than when they’re learning. Perhaps there’s some energy to be freed up if learning can be made more efficient.
How do you know? As far as I know, the textbooks say that brain energy use is relatively constant regardless of the mental activity in which one engages.
http://www.sciencedaily.com/releases/2013/08/130804144503.htm
I suppose that with the many scandals about scientific research, I can’t say that I know it’s correct, but let’s just say it’s both plausible and I’ve heard it more than once from sources which might be reliable.
The article doesn’t talk about whole brain activity but about the activity of a particular part of the brain. It’s not clear that the whole brain uses less energy.
Fair point. This should be checked. On the other hand, does it seem likely that if the brain uses less energy in one part because it’s doing a familiar task, the rest of the brain will use more energy to compensate?
It could simply be that when the brain does a familiar task, different brain regions are activated than when it’s not doing a familiar task.
Given what I learned in university physiology lectures the body also regulates to try to keep the amount of energy used by the brain constant. The brain is complex.
I don’t see how
leads to the assertion that intelligence is cell-energy limited and that increasing brain metabolism (on timescales shorter than evolutionary) will lead to increased IQ. In particular, I don’t know of evidence that starving people become stupid.
I think this would be difficult to observe, because starvation also increases stress hormones, which increase motivation. For the most part, brain metabolism remains high in starvation, but other glucose-using systems are reduced, to preserve glucose for the brain. Ketones are also used to reduce glucose demands while keeping brain metabolism high. Anecdotally, I seem to be more motivated when fasting or dieting, but more creative when eating a nutritious diet.
Metabolic syndrome and hypothyroidism (both diseases of decreased cellular energy production) are correlated with reduced intelligence.
Citations:
Intelligence quotient in children with congenital hypothyroidism: The effect of diagnostic and treatment variables.
Pre-morbid intelligence, the metabolic syndrome and mortality: the Vietnam Experience Study.
Contribution of Metabolic Syndrome Components to Cognition in Older Individuals
Cognitive function in non-demented older adults with hypothyroidism.
Hypothyroidism obviously has a lot more consequences than just decreased cellular energy production and as to the metabolic syndrome, I’m not sure I’d call it a “disease of decreased cellular energy production” at all, it looks much more like a breakdown of some regulatory systems to me.
Of course, there are many ways to harm brain (=reduce intelligence) which do not involve limiting ATP in the cells.
Personally, I would expect a starving person to perform significantly worse on a math test than a non-starving person. Heck, I would expect a very hungry person to perform significantly worse on a math test than a non-very-hungry person. I’ve been hungry during the course of today and I think my mental performance was worse when I was hungry. When I’m hungry, it feels like my brain has an aversion to thinking hard, presumably in order to get me to preserve energy.
https://freethoughtblogs.com/pharyngula/2022/10/24/one-way-to-be-less-wrong-is-to-avoid-faulty-premises/