As someone who spent a few years researching this direction intensely before deciding to go work on AI alignment directly (the opposite direction you’ve gone!), I can’t resist throwing in my two cents.
I think germline engineering could do a lot, if we had multiple generations to work with. As I’ve told you, I don’t think we have anywhere near enough time for even a single generation (much less five or ten).
I think direct brain tissue implantation is harder even than you imagine. Getting the neurons wired up right in an adult brain is pretty tricky. Even when people do grow new axons and dendrites after an injury to replace a fraction of their lost tissue, this sometimes goes wrong and makes things worse. Misconnected neurons are more of a problem than too few neurons.
I think there’s a lot more potential in brain-computer interfaces than you are giving them credit for, and there’s an application you haven’t mentioned.
Some things to consider here:
The experiments that have been tried in humans have been extremely conservative, aiming to fix problems in the most well-understood but least-relevant-to-intelligence areas of the brain (sensory input, motor output). In other words, weaksauce by design. We really don’t have any experiments that tell us how much intelligence amplification we could get from having someone’s math/imagination/science/grokking/motivation areas hooked up to a feedback loop with a computer running some sort of efficient brain tissue simulator. I’m guessing that this could actually do some impressive augmentation using current tech (e.g. Neuralink). The weak, boring, non-intelligence-relevant results you see in the literature are very much by design of the scientists, acting in socially reasonable ways to pursue cautious incremental science or timid therapeutics. This is not evidence that the tech itself is actually this limited.
BCI implants have also usually been targeted at having detailed i/o for a very specific area, not the properly distributed i/o you’d want for networking brains together. This is the case both for therapeutic human implants, and for scientific implants where scientists are trying to measure detailed things about a very specific spot in the brain.
For a proper readout, a good example is the webbing of many sensors laid over a large area of cortex in human patients who are being treated for epilepsy. The doctors need to find the specific source of the epileptic cascade so they can treat that particular root cause. Thus, they need to simultaneously monitor many brain regions. What you need is an implant designed more at this multi-area scale.
Growing brain tissue in a vat is relatively hard and expensive compared to growing brain tissue in an animal. Also, it’s going to be less well-ordered neural nets, which matters a lot. Well organized cortical microcolumns work well, disordered brain tissue works much less well.
Given this, I think it makes a lot more sense to focus on animal brains. Pigs have reasonably sized brains for a domestic animal, are relatively long-lived (compared to mice, for example), and are easy to produce in large numbers in a controlled environment. It’s also not that hard to swap in a variety of genes related to neural development to get an animal to grow neurons that are much more similar to human neurons. This has been done a bunch in mice, for various different sets of genes. So: create a breeding line of pigs with humanized neurons, raise up a bunch of them in a secure facility, and give a whole bunch of juveniles BCI implants all at once. (Juvenile is important, since brain circuit flexibility is much higher in young animals.) You probably get something like 10,000x as much brain tissue per dollar from pigs as from neurons in a petri dish. Also, we already have pig farms that operate at the appropriate scale, but we definitely don’t have brain-vat factories at scale.
Once you have a bunch of test animals with BCI implants into various areas of their brains, you can try many different experiments with the same animals and hardware. You can try making networks where the animals are given boosts from computer models, where many animals are networked together, where there is a ‘controller’ model on the computer which tries to use the animals as compute resources (either separately or networked together). If you also have a human volunteer with BCI implants in the correct parts of their brain, you can have them try to use the pigs.
Whether or not a target subject is controlling or being controlled is dependent on a variety of factors like which areas the signals are being input into versus read from, and the timing thereof. You can have balanced networks, or controller/subject networks.
My studies of the compute graph of the brain, based on recent human connectome research data (for AI alignment purposes), show that you can actually do everything you need with current 10^4-scale connection implant tech. You don’t need 10^6-scale tech. Why? Because the regions of the brain are themselves connected at the 10^2 − 10^4 scale (fewer connections for more distant areas; the most connections for immediately adjacent and highly related areas like V1 to V2). The long-range axons that communicate between brain regions are shockingly few. The brain seems to work fine with this level of information compression into and out of all its regions. Thus, if you had, say, 6 different implants, each with 10^3 − 10^4 connectivity, and had those 6 implants placed in key areas for intelligence (rather than weaksauce sensory or motor areas), that’s probably all you need. It’s common practice to insert 10-15 implants (sometimes even more if needed) into the brain of an epilepsy patient in order to figure out where the surgeons need to cut. The tech is there, and works plenty well.
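To make the arithmetic here concrete, a quick back-of-the-envelope sketch (the channel counts and region figures are the rough illustrative numbers from above, not measurements):

```python
# Back-of-the-envelope check: can a handful of current-generation implants
# match the brain's own interregional axon counts? All figures are rough
# assumptions for illustration.

# Interregional connectivity suggested by connectome data: roughly
# 10^2 axons between distant areas, up to ~10^4 between adjacent ones.
axons_distant = 10**2
axons_adjacent = 10**4

# Hypothetical deployment: 6 implants in intelligence-relevant areas,
# each with 10^3 - 10^4 usable channels.
n_implants = 6
channels_low = 10**3
channels_high = 10**4

total_low = n_implants * channels_low    # 6000 channels
total_high = n_implants * channels_high  # 60000 channels

# Even the pessimistic total exceeds the axon count between distant
# regions, and the optimistic total exceeds adjacent-region connectivity.
print(total_low >= axons_distant)    # True
print(total_high >= axons_adjacent)  # True
```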
That’s creative. But:
It seems immoral, maybe depending on details. Depending on how humanized the neurons are, and what you do with the pigs (especially the part where human thinking could get trained into them!), you might be creating moral patients and then maiming and torturing them.
It has a very high ick factor. I mean, I’m icked out by it; you’re creating monstrosities.
I assume it has a high taboo factor.
It doesn’t seem that practical. I don’t immediately see an on-ramp for the innovation; in other words, I don’t see intermediate results that would be interesting or useful, e.g. in an academic or commercial context. That’s in contrast to germline engineering or brain-brain interfaces, which have lots of component technologies and partial successes that would be useful and interesting. Do you see such things here?
Further, it seems far, far less scalable than other methods. That means you get way less adoption, which means you get way fewer geniuses. Also, importantly, it means that complaints about inequality become true. With, say, germline engineering, anyone who can lease-to-own a car can also have genetically super healthy, sane, smart kids. With a networked-modified-pig-brain-implant farm monster, it’s a very niche thing only accessible to the rich and/or well-connected. Or is there a way this eventually results in a scalable strong intelligence boost?
You probably get something like 10,000x as much brain tissue per dollar from pigs as from neurons in a petri dish.
That’s compelling though, for sure.
On the other hand, the quality is going to be much lower compared to human brains. (Though presumably higher quality compared to in vitro brain tissue.) My guess is that quality is way more important in our context. I wouldn’t think so as strongly if connection bandwidth were free; in that case, plausibly you can get good work out of the additional tissue. Like, on one end of the spectrum of “what might work”, with low-quality high-bandwidth, you’re doing something like giving each of your brain’s microcolumns an army of 100 additional, shitty microcolumns for exploration / invention / acceleration / robustness / fan-out / whatever. On the other end, you have high-quality low-bandwidth: multiple humans connected together, and it’s maybe fine that bandwidth is low because both humans are capable of good thinking on their own. But low-quality low-bandwidth seems like it might not help much—it might be similar to trying to build a computer by training pigs to run around in certain patterns.
How important is it to humanize the neurons, if the connections to humans will be remote by implant anyway? Why use pigs rather than cows? (I know people say pigs are smarter, but that’s also more of a moral cost; and I’m wondering if that actually matters in this context. Plausibly the question is really just, can you get useful work out of an animal’s brain, at all; and if so, a normal cow is already “enough”.)
We see pretty significant changes in human abilities when brain volume changes only a bit. I think if you can 10x the effective brain volume, even if the additional ‘regions’ are of lower quality, you should expect some dramatic effects. My guess is that if it works at all, you get at least 7 SDs of sudden improvement over a month or so of adaptation, maybe more.
As I explained, I think evidence from the human connectome shows that bandwidth is not an issue. We should be able to supply plenty of bandwidth.
I continue to find it strange that you are so convinced that computer simulations of neurons would be insufficient to provide benefit. I’d definitely recommend trying that before networking animal brains to a human. In that case, you can do quite a lot of experimentation with a lot of different neuronal models and machine learning models as possible boosters for just a single human. It’s so easy to change what program the computer is running, and how much compute you have hooked up. It seems to me you should prove that this doesn’t work before even considering going the animal-brain route. I’m confident that no existing research has attempted anything like this, so we have no empirical evidence to show that it wouldn’t work. Again, even if each simulated cortical column is only 1% as effective (which seems like a substantial underestimate to me), we’d be able to use enough compute that we could easily simulate 1000x extra.
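Spelled out, the arithmetic behind that last claim (both the effectiveness ratio and the scale factor are my rough assumed numbers):

```python
# Rough arithmetic behind "1% as effective, but 1000x as many".
# Both figures are illustrative assumptions, not measurements.

effectiveness_ratio = 0.01  # simulated cortical column vs. biological one
scale_factor = 1000         # extra simulated columns per biological column

# Net effective boost, measured in units of biological cortical tissue:
effective_boost = effectiveness_ratio * scale_factor
print(effective_boost)  # 10.0, i.e. roughly 10x the biological tissue
```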
Have you watched videos of the first Neuralink patient using a computer? He has great cursor control, substantially better than previous implants have been able to deliver. I think this is strong evidence that the implant tech is at an acceptable performance level.
I don’t think the moral cost is relevant if the thing you are comparing it to is saving the world, and making lots of human and animal lives much better.
It seems less problematic to me than a single ordinary pig farm, since you’d be treating these pigs unusually well.
Weird that you’d feel good about letting the world get destroyed in order to have one fewer pig farm in it. Are you reasoning from Copenhagen ethics? That approach doesn’t resonate with me, so maybe that’s why I’m confused.
It is quite impractical. A weird last ditch effort to save the world. It wouldn’t be scalable, you’d be enhancing just a handful of volunteers who would then hopefully make rapid progress on alignment.
To get a large population of people smarter, polygenic selection seems much better. But slow.
The humanization isn’t critical, and it isn’t for the purposes of immune-signature matching. It’s human genes related to neural development, so that the neurons behave more like human neurons (e.g. forming 10x more synapses in the cortex).
Pigs are a better cost-to-brain-matter ratio.
I wasn’t worrying about animal suffering here, like I said above.
It is quite impractical. A weird last ditch effort to save the world. It wouldn’t be scalable, you’d be enhancing just a handful of volunteers who would then hopefully make rapid progress on alignment.
Gotcha. Yeah, I think these strategies probably just don’t work.
It seems less problematic to me than a single ordinary pig farm, since you’d be treating these pigs unusually well.
The moral differences are:
Humanized neurons.
Animals with parts of their brains being exogenously driven; this could cause large amounts of suffering.
Animals with humanized thinking patterns (which is part of how the scheme would be helpful in the first place).
Weird that you’d feel good about letting the world get destroyed in order to have one fewer pig farm in it.
Where did you get the impression that I’d feel good about, or choose, that? My list of considerations is a list of considerations.
That said, I think morality matters, and ignoring morality is a big red flag.
Separately, even if you’re pretending to be a ruthless consequentialist, you still want to track morality and ethics and ickiness, because they’re a very strong determiner of whether or not other people will want to work on something, which is a very strong determiner of success or failure.
Yes, fair enough. I’m not saying that clearly immoral things should be on the table. It just seems weird to me to treat as off-limits something that seems approximately equivalent to a common human activity (raising and killing pigs) that isn’t widely considered immoral.
exogenous driving of a fraction of cortical tissue to result in suffering of the subjects
My reason is that suffering in general seems related to [intentions pushing hard, but with no traction or hope]. A subspecies of that is [multiple drives pushing hard against each other, with nobody pulling the rope sideways]. A new subspecies would be “I’m trying to get my brain tissue to do something, but it’s being externally driven, so I’m just scrabbling my hands futilely against a sheer blank cliff wall.” and “Bits of my mind are being shredded because I create them successfully by living and demanding stuff of my brain, but then the bits are exogenously driven / retrained and no longer do what I made them to do.”.
Really hard to know without more research on the subject.
My subjective impression from working with mice and rats is that there isn’t a strong negative reaction to having bits of their cortex stimulated in various ways (electrodes, optogenetics).
Unlike, say, experiments where we test their startle reaction by placing them in a small cage with a motion sensor and then playing a loud startling sound. They hate that!
This is interesting, but I don’t understand what you’re trying to say and I’m skeptical of the conclusion. How does this square with half the brain being myelinated axons? Are you talking about adult brains or child brains? If you’re up for it, maybe let’s have a call at some point.
Half the brain by >>volume<< being myelinated axons. Myelinated axons are extremely volume-wasteful due to their large width over relatively large distances.
I’m talking about adult brains. Child brains have slightly more axons (less pruning and aging loss has occurred), but much less myelination.
Growing brain tissue in a vat is relatively hard and expensive compared to growing brain tissue in an animal. Also, it’s going to be less well-ordered neural nets, which matters a lot. Well organized cortical microcolumns work well, disordered brain tissue works much less well.
Yep, I agree. I vaguely alluded to this by saying “The main additional obstacle [...] is growing cognitively useful tissue in vitro.”; what I have in mind is stuff like:
Well-organized connectivity, as you say.
Actually emulating 5-minute and 5-day behavior of neurons—which I would guess relies on being pretty neuron-like, including at the epigenetic level. IIUC current in vitro neural organoids are kind of shitty—epigenetically speaking they’re definitely more like neurons than like hepatocytes, but they’re not very close to being neurons.
Appropriate distribution of cell types (related to well-organized connectivity). This adds a whole additional wrinkle. Not only do you have to produce a variety of epigenetic states, but also you have to have them be assorted correctly (different regions, layers, connections, densities...). E.g. the right amount of glial cells...
The experiments that have been tried in humans have been extremely conservative, aiming to fix problems in the most well-understood but least-relevant-to-intelligence areas of the brain (sensory input, motor output). [....] This is not evidence that the tech itself is actually this limited.
Your characterization of the current state of research matches my impressions (though it’s good to hear from someone who knows more). My reasons for thinking BCIs are weaksauce have never been about that, though. The reasons are that:
I don’t see any compelling case for anything you can do on a computer which, when you hook it up to a human brain, makes the human brain very substantially better at solving philosophical problems. I can think of lots of cool things you can do with a good BCI, and I’m sure you and others can think of lots of other cool things, but that’s not answering the question. Do you see a compelling case? What is it? (To be more precise, I do see compelling cases for the few areas I mentioned: prosthetic intrabrain connectivity and networking humans. But those both seem quite difficult technically, and would plausibly be capped in their success by connection bandwidth, which is technically difficult to increase.)
It doesn’t seem like we understand nearly as much about intelligence as evolution does (in a weak sense of “understand” that includes stuff encoded in the human genome cloud). So stuff that we program into a computer will be qualitatively much less helpful for real human thinking, compared to just copying evolution’s work. (If you can’t see that LLMs don’t think, I don’t expect to make progress debating that here.)
I think that cortical microcolumns are fairly close to acting in a pretty well stereotyped way that we can simulate pretty accurately on a computer. And I don’t think their precise behavior is all that critical. I think actually you could get 80-90% of the effective capacity by simply having a small (10k? 100k? parameter) transformer standing in for each simulated cortical column, rather than a less compute efficient but more biologically accurate simulation.
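To see why 10k-100k parameters per stand-in transformer is a plausible range, here is a rough parameter-count sketch. The width/depth choices are illustrative assumptions on my part, not a proposed architecture:

```python
# Approximate parameter counts for tiny transformers that could stand in
# for individual cortical microcolumns. Architecture hyperparameters are
# illustrative assumptions, chosen only to show the 10k-100k range.

def transformer_params(d_model, n_layers, ff_mult=4):
    """Approximate parameter count, ignoring embeddings, norms, biases."""
    attn = 4 * d_model * d_model          # Q, K, V, and output projections
    ff = 2 * ff_mult * d_model * d_model  # two feed-forward matrices
    return n_layers * (attn + ff)

print(transformer_params(d_model=16, n_layers=2))  # 6144, roughly 10k scale
print(transformer_params(d_model=32, n_layers=3))  # 36864, mid-range
print(transformer_params(d_model=48, n_layers=4))  # 110592, roughly 100k scale
```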
The tricky part is just setting up the rules for intercolumn connection (excitatory and inhibitory) properly. I’ve been making progress on this in my research, as I’ve mentioned to you in the past.
Interregional connections (e.g. parietal lobe to prefrontal lobe, or V1 to V2) are fewer, and consistent enough between different people, and involve many fewer total connections, so they’ve all been pretty well described by modern neuroscience. The full weighted directed graph is known, along with a good estimate of the variability on the weights seen between individuals.
It’s not the case that the whole brain is involved in each specific ability that a person has. The human brain has a lot of functional localization. For a specific skill, like math or language, there is some distributed contribution from various areas but the bulk of the computation for that skill is done by a very specific area. This means that if you want to increase someone’s math skill, you probably need to just increase that specific known 5% or so of their brain most relevant to math skill by 10x. This is a lot easier than needing to 10x the entire brain.
I don’t know enough to evaluate your claims, but more importantly, I can’t even just take your word for everything because I don’t actually know what you’re saying without asking a whole bunch of followup questions. So hopefully we can hash some of this out on the phone.
An estimation of the absolute number of axons indicates that human cortical areas are sparsely connected
Burke Q. Rosen, Eric Halgren
Abstract
The tracts between cortical areas are conceived as playing a central role in cortical information processing, but their actual numbers have never been determined in humans. Here, we estimate the absolute number of axons linking cortical areas from a whole-cortex diffusion MRI (dMRI) connectome, calibrated using the histologically measured callosal fiber density. Median connectivity is estimated as approximately 6,200 axons between cortical areas within hemisphere and approximately 1,300 axons interhemispherically, with axons connecting functionally related areas surprisingly sparse. For example, we estimate that <5% of the axons in the trunk of the arcuate and superior longitudinal fasciculi connect Wernicke’s and Broca’s areas. These results suggest that detailed information is transmitted between cortical areas either via linkage of the dense local connections or via rare, extraordinarily privileged long-range connections.
Interregional connections (e.g. parietal lobe to prefrontal lobe, or V1 to V2) are fewer, and consistent enough between different people, and involve many fewer total connections, so they’ve all been pretty well described by modern neuroscience.
Wait, are you saying that not only is there quite low long-distance bandwidth, but also relatively low bandwidth between neighboring areas? Numbers would be very helpful.
And if there’s much higher bandwidth between neighboring regions, might there not be a lot more information that’s propagating long-range but only slowly through intermediate areas (or would that be too slow or sth?)?
(Relatedly, how crisply does the neocortex factor into different (specialized) regions? (Like I’d have thought it’s maybe sorta continuous?))
I’m glad you’re curious to learn more!
The cortex factors quite crisply into specialized regions. These regions have different cell types and groupings, so were first noticed by early microscope users like Cajal.
In a cortical region, neurons are organized first into microcolumns of 80-100 neurons, and then into macrocolumns of many microcolumns.
Each microcolumn works together as a group to calculate a function. Neighboring microcolumns inhibit each other. So each macrocolumn is sort of a mixture of experts.
The question then is how many microcolumns from one region send an output to a different region. For the example of V1 to V2, basically every microcolumn in V1 sends a connection to V2 (and vice versa). This is why the connection percentage is about 1%: 100 neurons per microcolumn, 1 of which has a long-distance axon to V2. The total number of neurons is roughly 10 million, organized into about 100,000 microcolumns.
For areas that are further apart, fewer axons are sent; this doesn’t mean their signal is unimportant, just lower resolution. In that case you’d ask something like “how many microcolumns per macrocolumn send out a long-distance axon from region A to region B?” This might be 1: just a summary report of the macrocolumn. So for roughly 10 million neurons, and 100,000 microcolumns organized into around 1,000 macrocolumns, you get around 1,000 neurons sending axons from region A to region B.
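For concreteness, here's the counting argument written out, using the rough figures from this thread (order-of-magnitude estimates, not precise measurements):

```python
# Counting long-range axons out of a V1-sized cortical region,
# using the rough figures from the discussion above.

neurons_per_region = 10_000_000
neurons_per_microcolumn = 100
microcolumns = neurons_per_region // neurons_per_microcolumn
print(microcolumns)  # 100000

# Adjacent regions (V1 -> V2): ~1 long-range axon per microcolumn,
# i.e. about 1% of the region's neurons project to the neighbor.
adjacent_axons = microcolumns * 1
print(adjacent_axons / neurons_per_region)  # 0.01, the "about 1%" figure

# Distant regions: ~1 long-range axon per macrocolumn instead.
microcolumns_per_macrocolumn = 100
macrocolumns = microcolumns // microcolumns_per_macrocolumn
print(macrocolumns)  # 1000, roughly 1,000 axons from region A to region B
```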
More details are in the papers I linked elsewhere in this comment thread.
Yeah, I believe what you say about the long-distance connections not being that many.
I meant that there might be more non-long-distance connections between neighboring areas. (E.g. boundaries of areas are a bit fuzzy IIRC, so macrocolumns towards the “edge” of a region are sort of intertwined with macrocolumns on the other side of the “edge”.) (I thought when you said V1 to V2 you included those too, but I guess you didn’t?)
Do you think those inter-area non-long-distance connections are relatively unimportant, and if so why?
Here’s a paper about describing the portion of the connectome which is invariant between individual people (basal component), versus that which is highly variant (superstructure):
https://arxiv.org/abs/2012.15854
Uncovering the invariant structural organization of the human connectome
Anand Pathak, Shakti N. Menon, and Sitabhra Sinha (dated January 1, 2021)
In order to understand the complex cognitive functions of the human brain, it is essential to study the structural macro-connectome, i.e., the wiring of different brain regions to each other through axonal pathways, that has been revealed by imaging techniques. However, the high degree of plasticity and cross-population variability in human brains makes it difficult to relate structure to function, motivating a search for invariant patterns in the connectivity. At the same time, variability within a population can provide information about the generative mechanisms.
In this paper we analyze the connection topology and link-weight distribution of human structural connectomes obtained from a database comprising 196 subjects. By demonstrating a correspondence between the occurrence frequency of individual links and their average weight across the population, we show that the process by which the human brain is wired is not independent of the process by which the link weights of the connectome are determined. Furthermore, using the specific distribution of the weights associated with each link over the entire population, we show that a single parameter that is specific to a link can account for its frequency of occurrence, as well as, the variation in its weight across different subjects. This parameter provides a basis for “rescaling” the link weights in each connectome, allowing us to obtain a generic network representative of the human brain, distinct from a simple average over the connectomes. We obtain the functional connectomes by implementing a neural mass model on each of the vertices of the corresponding structural connectomes.
By comparing these with the empirical functional brain networks, we demonstrate that the rescaling procedure yields a closer structure-function correspondence. Finally, we show that the representative network can be decomposed into a basal component that is stable across the population and a highly variable superstructure.
As someone who spent a few years researching this direction intensely before deciding to go work on AI alignment directly (the opposite direction you’ve gone!), I can’t resist throwing in my two cents.
I think germline engineering could do a lot, if we had the multiple generations to work with. As I’ve told you, I don’t think we have anything like near enough time for a single generation (much less five or ten).
I think direct brain tissue implantation is harder even than you imagine. Getting the neurons wired up right in an adult brain is pretty tricky. Even when people do grow new axons and dendrites after an injury to replace a fraction of their lost tissue, this sometimes goes wrong and makes things worse. Misconnected neurons are more of a problem than too few neurons.
I think there’s a lot more potential in brain-computer-interfaces than you are giving them credit for, and an application you haven’t mentioned.
Some things to consider here:
The experiments that have been tried in humans have been extremely conservative, aiming to fix problems in the most well-understood but least-relevant-to-intelligence areas of the brain (sensory input, motor output). In other words, weaksauce by design. We really don’t have any experiments that tell us how much intelligence amplification we could get from having someone’s math/imagination/science/grokking/motivation areas hooked up to a feedback loop with a computer running some sort of efficient brain tissue simulator. I’m guessing that this could actually do some impressive augmentation using current tech (e.g. neuralink). The weak boring non-intelligence-relevant results you see in the literature are very much by design of the scientists, acting in socially reasonable ways to pursue cautious incremental science or timid therapeutics. This is not evidence that the tech itself is actually this limited.
BCI implants have also usually been targeted at having detailed i/o for a very specific area, not the properly distributed i/o you’d want for networking brains together. This is the case both for therapeutic human implants, and for scientific implants where scientists are trying to measure detailed things about a very specific spot in the brain.
For a proper readout, a good example is the webbing of many sensors laid over a large area of cortex in human patients who are being treated for epilepsy. The doctors need to find the specific source of the epileptic cascade so they can treat that particular root cause. Thus, they need to simultaneously monitor many brain regions. What you need is an implant designed more at this multi-area scale.
Growing brain tissue in a vat is relatively hard and expensive compared to growing brain tissue in an animal. Also, it’s going to be less well-ordered neural nets, which matters a lot. Well organized cortical microcolumns work well, disordered brain tissue works much less well.
Given this, I think it makes a lot more sense to focus on animal brains. Pigs have reasonably sized brains for a domestic animal, and are relatively long-lived (compared to mice for example) and easy to produce lots of in a controlled environment. It’s also not that hard to swap in a variety of genes related to neural development to get an animal to grow neurons that are much more similar to human neurons. This has been done a bunch in mice, for various different sets of genes. So, create a breeding line of pigs with humanized neurons, raise up a bunch of them in a secure facility, and give a whole bunch of juveniles BCI implants all at once. (Juvenile is important, since brain circuit flexibility is much higher in young animals.) You probably get something like 10000x as much brain tissue per dollar using pigs than neurons in a petri dish. Also, we already have pig farms that operate at the appropriate scale, but definitely don’t have brain-vat-factories at scale.
Once you have a bunch of test animals with BCI implants into various areas of their brains, you can try many different experiments with the same animals and hardware. You can try making networks where the animals are given boosts from computer models, where many animals are networked together, where there is a ‘controller’ model on the computer which tries to use the animals as compute resources (either separately or networked together). If you also have a human volunteer with BCI implants in the correct parts of their brain, you can have them try to use the pigs.
Whether or not a target subject is controlling or being controlled is dependent on a variety of factors like which areas the signals are being input into versus read from, and the timing thereof. You can have balanced networks, or controller/subject networks.
4. My studies of the compute graph of the brain based on recent human connectome research data (for AI alignment purposes), show that actually you can do everything you need with current 10^4 scale connection implant tech. You don’t need 10^6 scale tech. Why? Because the regions of the brain are themselves connected at 10^2 − 10^4 scale (fewer connections for more distant areas, the most connections for immediately adjacent and highly related areas like V1 to V2). The long range axons that communicate between brain regions are shockingly few. The brain seems to work fine with this level of information compression into and out of all its regions. Thus, if you had say, 6 different implants each with 10^3 − 10^4 connectivity, and had those 6 implants placed in key areas for intelligence (rather than weaksauce sensory or motor areas), that’s probably all you need. It’s common practice to insert 10-15 implants (sometimes even more if needed) into the brain of an epilepsy patient in order to figure out where the surgeons need to cut. The tech is there, and works plenty well.
That’s creative. But
It seems immoral, maybe depending on details. Depending on how humanized the neurons are, and what you do with the pigs (especially the part where human thinking could get trained into them!), you might be creating moral patients and then maiming and torturing them.
It has a very high ick factor. I mean, I’m icked out by it; you’re creating monstrosities.
I assume it has a high taboo factor.
It doesn’t seem that practical. I don’t immediately see an on-ramp for the innovation; in other words, I don’t see intermediate results that would be interesting or useful, e.g. in an academic or commercial context. That’s in contrast to germline engineering or brain-brain interfaces, which have lots of component technologies and partial successes that would be useful and interesting. Do you see such things here?
Further, it seems far far less scalable than other methods. That means you get way less adoption, which means you get way fewer geniuses. Also, importantly, it means that complaints about inequality become true. With, say, germline engineering, anyone who can lease-to-own a car can also have genetically super healthy, sane, smart kids. With networked-modified-pig-brain-implant-farm-monster, it’s a very niche thing only accessible to the rich and/or well-connected. Or is there a way this eventually results in a scalable strong intelligence boost?
That’s compelling though, for sure.
On the other hand, the quality is going to be much lower compared to human brains. (Though presumably higher quality compared to in vitro brain tissue.) My guess is that quality is way more important in our context. I wouldn’t think so as strongly if connection bandwidth were free; in that case, plausibly you can get good work out of the additional tissue. Like, on one end of the spectrum of “what might work”, with low-quality high-bandwidth, you’re doing something like giving each of your brain’s microcolumns an army of 100 additional, shitty microcolumns for exploration / invention / acceleration / robustness / fan-out / whatever. On the other end, you have high-quality low-bandwidth: multiple humans connected together, and it’s maybe fine that bandwidth is low because both humans are capable of good thinking on their own. But low-quality low-bandwidth seems like it might not help much; it might be similar to trying to build a computer by training pigs to run around in certain patterns.
How important is it to humanize the neurons, if the connections to humans will be remote by implant anyway? Why use pigs rather than cows? (I know people say pigs are smarter, but that’s also more of a moral cost; and I’m wondering if that actually matters in this context. Plausibly the question is really just, can you get useful work out of an animal’s brain, at all; and if so, a normal cow is already “enough”.)
We see pretty significant changes in human abilities when brain volume changes only a bit. I think if you can 10x the effective brain volume, even if the additional ‘regions’ are of lower quality, you should expect some dramatic effects. My guess is that if it works at all, you get at least 7 SDs of sudden improvement over a month or so of adaptation, maybe more.
As I explained, I think evidence from the human connectome shows that bandwidth is not an issue. We should be able to supply plenty of bandwidth.
I continue to find it strange that you are so convinced that computer simulations of neurons would be insufficient to provide benefit. I’d definitely recommend that before trying to network animal brains to a human. In that case, you can do quite a lot of experimentation with a lot of different neuronal models and machine learning models as possible boosters for just a single human. It’s so easy to change what program the computer is running, and how much compute you have hooked up. Seems to me you should prove that this doesn’t work before even considering going the animal brain route. I’m confident that no existing research has attempted anything like this, so we have no empirical evidence to show that it wouldn’t work. Again, even if each simulated cortical column is only 1% as effective (which seems like a substantial underestimate to me), we’d be able to use enough compute that we could easily simulate 1000x extra.
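The arithmetic in that last sentence works out as follows (both numbers are the comment’s assumptions, not measurements):

```python
# Sketch of the compute argument: even if each simulated cortical column
# is only 1% as effective as a biological one, simulating 1000x extra
# columns still nets roughly 10x extra effective capacity.

per_column_effectiveness = 0.01   # simulated vs. biological column (assumed)
extra_columns_multiplier = 1000   # simulated columns per real column (assumed)

effective_extra_capacity = per_column_effectiveness * extra_columns_multiplier
print(effective_extra_capacity)  # 10.0, i.e. ~10x extra effective capacity
```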
Have you watched videos of the first Neuralink patient using a computer? He has great cursor control, substantially better than previous implants have been able to deliver. I think this is strong evidence that the implant tech is at an acceptable performance level.
I don’t think the moral cost is relevant if the thing you are comparing it to is saving the world, and making lots of human and animal lives much better. It seems less problematic to me than a single ordinary pig farm, since you’d be treating these pigs unusually well. It’s weird that you’d feel good about letting the world get destroyed in order to have one fewer pig farm in it. Are you reasoning from Copenhagen ethics? That approach doesn’t resonate with me, so maybe that’s why I’m confused.
It is quite impractical. A weird last ditch effort to save the world. It wouldn’t be scalable, you’d be enhancing just a handful of volunteers who would then hopefully make rapid progress on alignment.
To get a large population of people smarter, polygenic selection seems much better. But slow.
The humanization isn’t critical, and it isn’t for the purposes of immune-signature matching. It’s human genes related to neural development, so that the neurons behave more like human neurons (e.g. forming 10x more synapses in the cortex).
Pigs are a better cost-to-brain-matter ratio.
I wasn’t worrying about animal suffering here, like I said above.
Gotcha. Yeah, I think these strategies probably just don’t work.
The moral differences are:
Humanized neurons.
Animals with parts of their brains being exogenously driven; this could cause large amounts of suffering.
Animals with humanized thinking patterns (which is part of how the scheme would be helpful in the first place).
Where did you get the impression that I’d feel good about, or choose, that? My list of considerations is a list of considerations.
That said, I think morality matters, and ignoring morality is a big red flag.
Separately, even if you’re pretending to be a ruthless consequentialist, you still want to track morality and ethics and ickiness, because it’s a very strong determiner of whether or not other people will want to work on something, which is a very strong determiner of success or failure.
Yes, fair enough. I’m not saying that clearly immoral things should be on the table. It just seems weird to me that this is something that seems approximately equivalent to a common human activity (raising and killing pigs) that isn’t widely considered immoral.
FWIW, I wouldn’t expect the exogenous driving of a fraction of cortical tissue to result in suffering of the subjects.
I do agree that having humanized neurons being driven in human thought patterns makes it weird from an ethical standpoint.
My reason is that suffering in general seems related to [intentions pushing hard, but with no traction or hope]. A subspecies of that is [multiple drives pushing hard against each other, with nobody pulling the rope sideways]. A new subspecies would be “I’m trying to get my brain tissue to do something, but it’s being externally driven, so I’m just scrabbling my hands futilely against a sheer blank cliff wall.” and “Bits of my mind are being shredded because I create them successfully by living and demanding stuff of my brain, but the bits are exogenously driven / retrained and forget to do what I made them to do.”
Really hard to know without more research on the subject.
My subjective impression from working with mice and rats is that there isn’t a strong negative reaction to having bits of their cortex stimulated in various ways (electrodes, optogenetics).
Unlike, say, experiments where we test their startle reaction by placing them in a small cage with a motion sensor and then playing a loud startling sound. They hate that!
This is interesting, but I don’t understand what you’re trying to say and I’m skeptical of the conclusion. How does this square with half the brain being myelinated axons? Are you talking about adult brains or child brains? If you’re up for it, maybe let’s have a call at some point.
Half the brain by >>volume<< being myelinated axons. Myelinated axons are extremely volume-wasteful due to their large width over relatively large distances.
I’m talking about adult brains. Child brains have slightly more axons (less pruning and aging loss has occurred), but much less myelination.
Happy to chat at some point.
Yep, I agree. I vaguely alluded to this by saying “The main additional obstacle [...] is growing cognitively useful tissue in vitro.”; what I have in mind is stuff like:
Well-organized connectivity, as you say.
Actually emulating 5-minute and 5-day behavior of neurons—which I would guess relies on being pretty neuron-like, including at the epigenetic level. IIUC current in vitro neural organoids are kind of shitty—epigenetically speaking they’re definitely more like neurons than like hepatocytes, but they’re not very close to being neurons.
Appropriate distribution of cell types (related to well-organized connectivity). This adds a whole additional wrinkle. Not only do you have to produce a variety of epigenetic states, but also you have to have them be assorted correctly (different regions, layers, connections, densities...). E.g. the right amount of glial cells...
Your characterization of the current state of research matches my impressions (though it’s good to hear from someone who knows more). My reasons for thinking BCIs are weaksauce have never been about that, though. The reasons are that:
I don’t see any compelling case for anything you can do on a computer which, when you hook it up to a human brain, makes the human brain very substantially better at solving philosophical problems. I can think of lots of cool things you can do with a good BCI, and I’m sure you and others can think of lots of other cool things, but that’s not answering the question. Do you see a compelling case? What is it? (To be more precise, I do see compelling cases for the few areas I mentioned: prosthetic intrabrain connectivity and networking humans. But those both seem quite difficult technically, and would plausibly be capped in their success by connection bandwidth, which is technically difficult to increase.)
We don’t seem to understand nearly as much about intelligence as evolution does (in a weak sense of “understand” that includes stuff encoded in the human genome cloud). So stuff that we’ll program in a computer will be qualitatively much less helpful for real human thinking, compared to just copying evolution’s work. (If you can’t see that LLMs don’t think, I don’t expect to make progress debating that here.)
I think that cortical microcolumns are fairly close to acting in a pretty well stereotyped way that we can simulate pretty accurately on a computer. And I don’t think their precise behavior is all that critical. I think actually you could get 80-90% of the effective capacity by simply having a small (10k? 100k? parameter) transformer standing in for each simulated cortical column, rather than a less compute efficient but more biologically accurate simulation.
The tricky part is just setting up the rules for intercolumn connection (excitatory and inhibitory) properly. I’ve been making progress on this in my research, as I’ve mentioned to you in the past.
Interregional connections (e.g. parietal lobe to prefrontal lobe, or V1 to V2) are fewer, and consistent enough between different people, and involve many fewer total connections, so they’ve all been pretty well described by modern neuroscience. The full weighted directed graph is known, along with a good estimate of the variability on the weights seen between individuals.
It’s not the case that the whole brain is involved in each specific ability that a person has. The human brain has a lot of functional localization. For a specific skill, like math or language, there is some distributed contribution from various areas but the bulk of the computation for that skill is done by a very specific area. This means that if you want to increase someone’s math skill, you probably need to just increase that specific known 5% or so of their brain most relevant to math skill by 10x. This is a lot easier than needing to 10x the entire brain.
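A quick illustration of why that localization matters, using the rough figure above (these are assumed numbers, not measurements):

```python
# If a skill is concentrated in ~5% of the brain, 10x-ing just that area
# requires far less new tissue than 10x-ing the whole brain.

skill_fraction = 0.05   # fraction of brain doing the bulk of, e.g., math (assumed)
scale_factor = 10       # desired expansion of that skill area

# Extra tissue required, as a multiple of current whole-brain volume:
targeted_cost = skill_fraction * (scale_factor - 1)   # 0.45 -> +45%
whole_brain_cost = 1.0 * (scale_factor - 1)           # 9.0  -> +900%

print(targeted_cost, whole_brain_cost)
```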
I don’t know enough to evaluate your claims, but more importantly, I can’t even just take your word for everything because I don’t actually know what you’re saying without asking a whole bunch of followup questions. So hopefully we can hash some of this out on the phone.
Sorry that my attempts to communicate technical concepts don’t always go smoothly!
I keep trying to answer your questions about ‘what I think I know and how I think I know it’ with dumps of lists of papers. Not ideal!
But sometimes I’m not sure what else to do, so.… here’s a paper!
https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001575
An estimation of the absolute number of axons indicates that human cortical areas are sparsely connected
Burke Q. Rosen, Eric Halgren
Abstract
The tracts between cortical areas are conceived as playing a central role in cortical information processing, but their actual numbers have never been determined in humans. Here, we estimate the absolute number of axons linking cortical areas from a whole-cortex diffusion MRI (dMRI) connectome, calibrated using the histologically measured callosal fiber density. Median connectivity is estimated as approximately 6,200 axons between cortical areas within hemisphere and approximately 1,300 axons interhemispherically, with axons connecting functionally related areas surprisingly sparse. For example, we estimate that <5% of the axons in the trunk of the arcuate and superior longitudinal fasciculi connect Wernicke’s and Broca’s areas. These results suggest that detailed information is transmitted between cortical areas either via linkage of the dense local connections or via rare, extraordinarily privileged long-range connections.
Wait, are you saying that not only is there quite low long-distance bandwidth, but also relatively low bandwidth between neighboring areas? Numbers would be very helpful.
And if there’s much higher bandwidth between neighboring regions, might there not be a lot more information that’s propagating long-range but only slowly through intermediate areas (or would that be too slow or sth?)?
(Relatedly, how crisply does the neocortex factor into different (specialized) regions? (Like I’d have thought it’s maybe sorta continuous?))
I’m glad you’re curious to learn more! The cortex factors quite crisply into specialized regions. These regions have different cell types and groupings, so they were first noticed by early microscope users like Cajal. In a cortical region, neurons are organized first into microcolumns of 80-100 neurons, and then into macrocolumns of many microcolumns. Each microcolumn works together as a group to calculate a function. Neighboring microcolumns inhibit each other. So each macrocolumn is sort of a mixture of experts. The question then is how many microcolumns from one region send an output to a different region. For the example of V1 to V2, basically every microcolumn in V1 sends a connection to V2 (and vice versa). This is why the connection percentage is about 1%: 100 neurons per microcolumn, 1 of which has a long-distance axon to V2. The total number of neurons is roughly 10 million, organized into about 100,000 microcolumns.
For areas that are further apart, they send fewer axons. Which doesn’t mean their signal is unimportant, just lower resolution. In that case you’d ask something like “how many microcolumns per macrocolumn send out a long-distance axon from region A to region B?” This might be 1, just a summary report of the macrocolumn. So for roughly 10 million neurons, with 100,000 microcolumns organized into around 1,000 macrocolumns, you get around 1,000 neurons sending axons from region A to region B.
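Collecting the round numbers from the last two paragraphs in one place (illustrative values from the comments, not measurements):

```python
# V1 -> V2 example plus the distant-region case, using the comment's numbers.

neurons_per_microcolumn = 100
total_neurons = 10_000_000

microcolumns = total_neurons // neurons_per_microcolumn   # 100,000

# Adjacent regions (V1 -> V2): ~1 long-range axon per microcolumn,
# i.e. about 1% of neurons project to the neighboring region.
adjacent_axons = microcolumns
fraction_projecting = adjacent_axons / total_neurons      # 0.01

# Distant regions: ~1 long-range axon per *macro*column instead.
microcolumns_per_macrocolumn = 100
macrocolumns = microcolumns // microcolumns_per_macrocolumn  # 1,000
distant_axons = macrocolumns                              # ~1,000

print(microcolumns, fraction_projecting, macrocolumns, distant_axons)
```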
More details are in the papers I linked elsewhere in this comment thread.
Thanks!
Yeah, I believe what you say about there not being that many long-distance connections.
I meant that there might be more non-long-distance connections between neighboring areas. (E.g. boundaries of areas are a bit fuzzy iirc, so macrocolumns towards the “edge” of a region are sorta intertwined with macrocolumns of the other side of the “edge”.)
(I thought when you said V1 to V2 you included those too, but I guess you didn’t?)
Do you think those inter-area non-long-distance connections are relatively unimportant, and if so why?
Here’s a paper about describing the portion of the connectome which is invariant between individual people (basal component), versus that which is highly variant (superstructure):
https://arxiv.org/abs/2012.15854
Uncovering the invariant structural organization of the human connectome
Anand Pathak, Shakti N. Menon and Sitabhra Sinha
(Dated: January 1, 2021)
In order to understand the complex cognitive functions of the human brain, it is essential to study the structural macro-connectome, i.e., the wiring of different brain regions to each other through axonal pathways, that has been revealed by imaging techniques. However, the high degree of plasticity and cross-population variability in human brains makes it difficult to relate structure to function, motivating a search for invariant patterns in the connectivity. At the same time, variability within a population can provide information about the generative mechanisms.
In this paper we analyze the connection topology and link-weight distribution of human structural connectomes obtained from a database comprising 196 subjects. By demonstrating a correspondence between the occurrence frequency of individual links and their average weight across the population, we show that the process by which the human brain is wired is not independent of the process by which the link weights of the connectome are determined. Furthermore, using the specific distribution of the weights associated with each link over the entire population, we show that a single parameter that is specific to a link can account for its frequency of occurrence, as well as, the variation in its weight across different subjects. This parameter provides a basis for “rescaling” the link weights in each connectome, allowing us to obtain a generic network representative of the human brain, distinct from a simple average over the connectomes. We obtain the functional connectomes by implementing a neural mass model on each of the vertices of the corresponding structural connectomes. By comparing these with the empirical functional brain networks, we demonstrate that the rescaling procedure yields a closer structure-function correspondence. Finally, we show that the representative network can be decomposed into a basal component that is stable across the population and a highly variable superstructure.