I don’t think the moral cost is relevant if the thing you are comparing it to is saving the world and making lots of human and animal lives much better. It seems less problematic to me than a single ordinary pig farm, since you’d be treating these pigs unusually well. Weird that you’d feel good about letting the world get destroyed in order to have one fewer pig farm in it. Are you reasoning from Copenhagen ethics? That approach doesn’t resonate with me, so maybe that’s why I’m confused.
It is quite impractical: a weird last-ditch effort to save the world. It wouldn’t be scalable; you’d be enhancing just a handful of volunteers, who would then hopefully make rapid progress on alignment.
To make a large population of people smarter, polygenic selection seems much better. But slow.
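For a rough sense of what “much better, but slow” cashes out to, here is a minimal Monte Carlo sketch of one generation of embryo selection. Every number in it (a polygenic score capturing ~10% of IQ variance, ten embryos per family, the within-family halving of score variance) is an illustrative assumption, not a figure from this discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# All numbers below are illustrative assumptions, not claims from the thread:
IQ_SD = 15.0          # population SD of IQ
VAR_EXPLAINED = 0.10  # fraction of IQ variance the polygenic score predicts
N_EMBRYOS = 10        # embryos to choose among per family
N_FAMILIES = 200_000  # Monte Carlo sample size

# Siblings share their parents' average genotype, so only about half the
# population-level score variance is available within a single family.
within_score_var = VAR_EXPLAINED / 2.0
score_sd = np.sqrt(within_score_var)
resid_sd = np.sqrt(1.0 - within_score_var)

# Each embryo's standardized IQ (relative to the family expectation) splits
# into the part the score can see plus everything it cannot.
score = rng.normal(0.0, score_sd, size=(N_FAMILIES, N_EMBRYOS))
resid = rng.normal(0.0, resid_sd, size=(N_FAMILIES, N_EMBRYOS))
iq = score + resid

# Select the embryo with the highest polygenic score in each family.
best = np.argmax(score, axis=1)
selected_iq = iq[np.arange(N_FAMILIES), best]

print(f"Expected gain: {selected_iq.mean() * IQ_SD:.1f} IQ points")
```

Under these assumptions the gain comes out to roughly 5 IQ points, and it compounds only once per human generation, which is the sense in which the approach is powerful but slow.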
The humanization isn’t critical, and it isn’t for the purposes of immune-signature matching. It’s human genes related to neural development, so that the neurons behave more like human neurons (e.g. forming 10x more synapses in the cortex).
Pigs have a better cost-to-brain-matter ratio.
I wasn’t worrying about animal suffering here, like I said above.
Gotcha. Yeah, I think these strategies probably just don’t work.
The moral differences are:
- Humanized neurons.
- Animals with parts of their brains being exogenously driven; this could cause large amounts of suffering.
- Animals with humanized thinking patterns (which is part of how the scheme would be helpful in the first place).
Where did you get the impression that I’d feel good about, or choose, that? My list of considerations is a list of considerations.
That said, I think morality matters, and ignoring morality is a big red flag.
Separately, even if you’re pretending to be a ruthless consequentialist, you still want to track morality and ethics and ickiness, because it’s a very strong determiner of whether other people will want to work on something, which is in turn a very strong determiner of success or failure.
Yes, fair enough. I’m not saying that clearly immoral things should be on the table. It just seems weird to treat this as beyond the pale when it’s approximately equivalent to a common human activity (raising and killing pigs) that isn’t widely considered immoral.
FWIW, I wouldn’t expect the exogenous driving of a fraction of cortical tissue to result in suffering of the subjects.
I do agree that having humanized neurons driven in human thought patterns makes this weird from an ethical standpoint.
My reason is that suffering in general seems related to [intentions pushing hard, but with no traction or hope]. A subspecies of that is [multiple drives pushing hard against each other, with nobody pulling the rope sideways]. A new subspecies would be “I’m trying to get my brain tissue to do something, but it’s being externally driven, so I’m just scrabbling my hands futilely against a sheer blank cliff wall” and “Bits of my mind are being shredded: I successfully create them by living and demanding stuff of my brain, but then the bits are exogenously driven / retrained and forget to do what I made them to do.”
Really hard to know without more research on the subject.
My subjective impression from working with mice and rats is that there isn’t a strong negative reaction to having bits of their cortex stimulated in various ways (electrodes, optogenetics).
Unlike, say, experiments where we test their startle reaction by placing them in a small cage with a motion sensor and then playing a loud startling sound. They hate that!