That’s creative. But:
It seems immoral, though maybe that depends on the details: depending on how humanized the neurons are, and on what you do with the pigs (especially the part where human thinking could get trained into them!), you might be creating moral patients and then maiming and torturing them.
It has a very high ick factor. I mean, I’m icked out by it; you’re creating monstrosities.
I assume it has a high taboo factor.
It doesn’t seem that practical. I don’t immediately see an on-ramp for the innovation; in other words, I don’t see intermediate results that would be interesting or useful, e.g. in an academic or commercial context. That’s in contrast to germline engineering or brain-brain interfaces, which have lots of component technologies and partial successes that would be useful and interesting. Do you see such things here?
Further, it seems far, far less scalable than other methods. That means you get way less adoption, which means you get way fewer geniuses. Also, importantly, it means that complaints about inequality become true. With, say, germline engineering, anyone who can lease-to-own a car can also have genetically super healthy, sane, smart kids. With the networked-modified-pig-brain-implant-farm-monster, it’s a very niche thing only accessible to the rich and/or well-connected. Or is there a way this eventually results in a scalable strong intelligence boost?
You probably get something like 10,000x as much brain tissue per dollar from pigs as from neurons in a petri dish.

That’s compelling though, for sure.
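To get a sense of the arithmetic behind a “10,000x per dollar” estimate, here is a sketch; every number in it is a placeholder I am assuming purely for illustration, not a figure from this exchange:

```python
# Purely illustrative placeholders, not data from this exchange.
pig_cost_usd = 1_000              # assumed: cost to raise one pig
pig_brain_g = 100                 # assumed: usable brain tissue per pig, in grams
culture_cost_usd_per_g = 100_000  # assumed: cost of lab-grown neural tissue

pig_cost_per_g = pig_cost_usd / pig_brain_g       # $10 per gram
ratio = culture_cost_usd_per_g / pig_cost_per_g   # 10,000x
print(f"pig tissue: ${pig_cost_per_g:.0f}/g; advantage vs. culture: {ratio:,.0f}x")
```

Any of those placeholders could be off by an order of magnitude without changing the qualitative point.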
On the other hand, the quality is going to be much lower compared to human brains. (Though presumably higher quality compared to in vitro brain tissue.) My guess is that quality is way more important in our context. I wouldn’t think so as strongly if connection bandwidth were free; in that case, plausibly you can get good work out of the additional tissue. Like, on one end of the spectrum of “what might work”, with low-quality high-bandwidth, you’re doing something like giving each of your brain’s microcolumns an army of 100 additional, shitty microcolumns for exploration / invention / acceleration / robustness / fan-out / whatever. On the other end, you have high-quality low-bandwidth: multiple humans connected together, and it’s maybe fine that bandwidth is low because both humans are capable of good thinking on their own. But low-quality low-bandwidth seems like it might not help much—it might be similar to trying to build a computer by training pigs to run around in certain patterns.
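One toy way to see the corners of that spectrum (the multiplicative form and the constants are my own illustrative assumptions, nothing established):

```python
# Toy model (an assumption, not anything established): added tissue helps in
# proportion to its quality, and bandwidth caps how much of its output the
# human brain can actually integrate.
def useful_work(tissue_x, quality, bandwidth):
    """tissue_x: added tissue as a multiple of one human brain;
    quality, bandwidth: 0..1 relative to within-brain levels."""
    return tissue_x * quality * bandwidth

for name, q, b in [("low quality / high bandwidth", 0.01, 1.0),
                   ("high quality / low bandwidth", 1.0, 0.01),
                   ("low quality / low bandwidth", 0.01, 0.01)]:
    print(f"{name}: {useful_work(100, q, b):.2f} brain-equivalents")
```

The high-quality / low-bandwidth corner is understated by this model, since tissue capable of good standalone thinking (like another human) is not bandwidth-capped in the same way; what the model does capture is that the low / low corner contributes almost nothing even at 100x tissue.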
How important is it to humanize the neurons, if the connections to humans will be remote by implant anyway? Why use pigs rather than cows? (I know people say pigs are smarter, but that’s also more of a moral cost; and I’m wondering if that actually matters in this context. Plausibly the question is really just, can you get useful work out of an animal’s brain, at all; and if so, a normal cow is already “enough”.)
We see pretty significant changes in human ability when brain volume changes only a bit. I think if you can 10x the effective brain volume, then even if the additional ‘regions’ are of lower quality, you should expect some dramatic effects. My guess is that if it works at all, you get at least 7 SDs of sudden improvement over a month or so of adaptation, maybe more.
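One back-of-envelope that lands in the same ballpark, assuming rough literature-style figures for the within-human volume spread and the volume–ability correlation, and extrapolating linearly in log-volume (which is exactly the speculative step):

```python
import math

# Assumed, rough within-human figures (illustrative, not cited):
r_volume_ability = 0.3   # assumed correlation between brain volume and ability
volume_sd_frac = 0.10    # assumed: 1 SD of human brain volume is ~10%

# Extrapolate linearly in log-volume (the speculative step):
sd_per_doubling = r_volume_ability / math.log2(1 + volume_sd_frac)  # ~2.2
gain_sd = sd_per_doubling * math.log2(10)  # for a 10x effective volume
print(f"~{gain_sd:.1f} SD")  # ~7.2 SD
```

On those assumptions a 10x effective volume comes out around 7 SD, though every step (treating the correlation as causal, linearity over a 10x range, lower-quality tissue counting fully) is doing a lot of work.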
As I explained, I think evidence from the human connectome shows that bandwidth is not an issue. We should be able to supply plenty of bandwidth.
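As one illustration of the kind of connectome evidence this could point to (ballpark anatomy figures assumed here, not taken from the discussion): the corpus callosum carries far fewer fibers than either hemisphere has neurons.

```python
# Ballpark anatomy figures (assumed; order-of-magnitude only):
callosal_axons = 200e6     # ~2e8 fibers linking the two hemispheres
cortical_neurons = 16e9    # ~1.6e10 neurons in the human cortex

fibers_per_neuron = callosal_axons / (cortical_neurons / 2)
print(f"~{fibers_per_neuron:.3f} callosal fibers per cortical neuron")  # ~0.025
```

That is, the two halves of a single mind already cooperate over a channel that is narrow relative to the computation on each side, which suggests an implant does not need anywhere near one wire per neuron.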
I continue to find it strange that you are so convinced that computer simulations of neurons would be insufficient to provide benefit. I’d definitely recommend that before trying to network animal brains to a human. In that case, you can do quite a lot of experimentation with a lot of different neuronal models and machine learning models as possible boosters for just a single human. It’s so easy to change what program the computer is running, and how much compute you have hooked up. Seems to me you should prove that this doesn’t work before even considering going the animal brain route. I’m confident that no existing research has attempted anything like this, so we have no empirical evidence to show that it wouldn’t work. Again, even if each simulated cortical column is only 1% as effective (which seems like a substantial underestimate to me), we’d be able to use enough compute that we could easily simulate 1000x extra.
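Spelling out the arithmetic in that last step (the 1% and 1000x are the figures from the paragraph above):

```python
# Figures from the paragraph above: 1000x extra simulated columns,
# each only 1% as effective as a biological cortical column.
simulated_columns_x = 1000  # multiples of the brain's own column count
effectiveness = 0.01        # per simulated column, vs. biological

print(f"{simulated_columns_x * effectiveness:.0f}x effective extra cortex")  # 10x
```

Even at 1% effectiveness, that is a 10x effective addition, the same order as what the pig scheme is aiming for.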
Have you watched videos of the first Neuralink patient using a computer? He has great cursor control, substantially better than previous implants have been able to deliver. I think this is strong evidence that the implant tech is at an acceptable performance level.
I don’t think the moral cost is relevant if the thing you are comparing it to is saving the world and making lots of human and animal lives much better. It seems less problematic to me than a single ordinary pig farm, since you’d be treating these pigs unusually well. Weird that you’d feel good about letting the world get destroyed in order to have one fewer pig farm in it. Are you reasoning from Copenhagen ethics? That approach doesn’t resonate with me, so maybe that’s why I’m confused.
It is quite impractical: a weird, last-ditch effort to save the world. It wouldn’t be scalable; you’d be enhancing just a handful of volunteers, who would then hopefully make rapid progress on alignment.
For making a large population of people smarter, polygenic selection seems much better. But slow.
The humanization isn’t critical, and it isn’t for the purposes of immune-signature matching. It’s a matter of adding human genes related to neural development, so that the neurons behave more like human neurons (e.g. forming 10x more synapses in the cortex).
Pigs have a better cost-to-brain-matter ratio.
I wasn’t worrying about animal suffering here, like I said above.
Gotcha. Yeah, I think these strategies probably just don’t work.
The moral differences are:
Humanized neurons.
Animals with parts of their brains being exogenously driven; this could cause large amounts of suffering.
Animals with humanized thinking patterns (which is part of how the scheme would be helpful in the first place).
Where did you get the impression that I’d feel good about, or choose, that? My list of considerations is a list of considerations.
That said, I think morality matters, and ignoring morality is a big red flag.
Separately, even if you’re pretending to be a ruthless consequentialist, you still want to track morality and ethics and ickiness, because it’s a very strong determiner of whether or not other people will want to work on something, which is a very strong determiner of success or failure.
Yes, fair enough. I’m not saying that clearly immoral things should be on the table. It just seems weird to me, because this seems approximately equivalent to a common human activity (raising and killing pigs) that isn’t widely considered immoral.
FWIW, I wouldn’t expect the exogenous driving of a fraction of cortical tissue to result in suffering of the subjects.
I do agree that having humanized neurons being driven in human thought patterns makes it weird from an ethical standpoint.
My reason is that suffering in general seems related to [intentions pushing hard, but with no traction or hope]. A subspecies of that is [multiple drives pushing hard against each other, with nobody pulling the rope sideways]. A new subspecies would be “I’m trying to get my brain tissue to do something, but it’s being externally driven, so I’m just scrabbling my hands futilely against a sheer blank cliff wall.” and “Bits of my mind are being shredded, because I create them successfully by living and demanding stuff of my brain, but then the bits are exogenously driven / retrained and forget to do what I made them to do.”
Really hard to know without more research on the subject.
My subjective impression from working with mice and rats is that there isn’t a strong negative reaction to having bits of their cortex stimulated in various ways (electrodes, optogenetics).
Unlike, say, experiments where we test their startle reaction by placing them in a small cage with a motion sensor and then playing a loud startling sound. They hate that!