True, but if it’s not the area in which Phil judges moral relevance, then I want to know why he thinks chimps and humans are different.
I was going to use the comparison “Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps.” (This avoids the potential issue of “babies grow up to be average humans.”)
If you’re not willing to advocate testing on humans who are similar to chimps, I want to know why.
I was going to use the comparison “Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps.” (This avoids the potential issue of “babies grow up to be average humans.”)
Along similar lines I was going to propose that it should be considered moral to test on “Low IQ Jocks as soon as they finish High School”. After all, they have finished their glory years and are different to me in similar ways to how I am different to a chimpanzee. But I didn’t post it, because I decided it was dangerous to go anywhere near a space that includes “different” and “less moral consideration”.
I agree that it’s dangerous, but I think any remotely productive use of this thread is going to have to go there. If we’re not asking that question, we’re not asking the right questions.
True, but if it’s not the area in which Phil judges moral relevance, then I want to know why he thinks chimps and humans are different.
I do think chimps and humans are different; but most members of PETA probably believe they are more different than I do. I think you’re reading positions into my post that aren’t there.
If you’re not willing to advocate testing on humans who are similar to chimps, I want to know why.
I advocated alternatively testing on humans like myself.
I apologize, I was focusing on a lot of the comments and missed that you had made that point.
I don’t currently know what the rules are for human testing. I think it should be theoretically possible for humans to submit themselves for whatever testing they want, but I also think that as soon as that market exists, there will be those who attempt to exploit it in ways I’d consider unethical. That’s a complex issue that I don’t have an opinion on yet.
I was going to use the comparison “Humans born mentally handicapped to the point that their cognitive function is equivalent to chimps.” (This avoids the potential issue of “babies grow up to be average humans.”)
It is not clear to me how that avoids the issue of including the future.
It avoids the issue of including the future of particular people. Some people care about that, others don’t, but it reduces the range of reasons you might object to the comparison.
From what I know, I personally weight chimps as maybe 1/3 as morally significant as humans. I’m sometimes willing to sacrifice humans to save other humans, and I’d sacrifice a chimp to save about 1/3 as many humans. (I’d also sacrifice a human to save 3x as many chimps.) This is mostly an intuitive belief. I can imagine myself changing the number to something as low as 1/10th, maybe even as low as 1/100th (though I don’t expect to drop it that far).
It’s important to note, though, that I DON’T sacrifice humans on a one-for-one trade-off without their consent. I don’t want to live in a world where someone can sacrifice me without me having a say in the matter. There may be cases where I’d be willing to consent to being sacrificed; I’m not sure I can identify them right now.
There are still circumstances where, while pissed, I’d grudgingly accept that the Mastermind doing the sacrificing was right to do so. (If they had to divert a train that was going to kill a lot of people, for example. Probably more than 5, though.) The number of lives that have to be saved for it to be worth it also depends on how good the information is, and on the likelihood that the sacrificer isn’t running on damaged hardware.
So theoretically, I’m okay with sacrificing chimps to save arbitrarily large numbers of people, but because the chimps CAN’T consent, I’d have to be willing to sacrifice somewhere between 1/3 and 1/10th as many humans to accomplish the same thing.
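To make that concrete, here’s a minimal sketch in Python of the consent-adjusted weighting I’m describing. The weight and the scenario numbers are purely hypothetical illustrations, not claims:

    # A hypothetical chimp-to-human moral weight; I said somewhere in the 1/3 to 1/10 range.
    CHIMP_WEIGHT = 1 / 3

    def human_equivalent(n_chimps, weight=CHIMP_WEIGHT):
        """How many human sacrifices sacrificing n_chimps non-consenting chimps weighs as."""
        return n_chimps * weight

    # Sacrificing 30 non-consenting chimps is weighted like sacrificing 10 humans,
    # so I'd have to be willing to sacrifice 10 humans to accomplish the same goal.
    print(human_equivalent(30))  # 10.0 (up to float rounding)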
I read your post and tried to come up with an ‘exchange rate’ of my own, and it was much more difficult to do than I expected. I thought that it would be along the lines of thousands/hundreds of thousands of chimps == 1 human, as I couldn’t conceive of letting one human die in exchange for any smaller number of chimps, but then I realized that I was just reacting with instinctual revulsion, and that it was much easier to think of dead chimps as an opportunity cost. (This assumes that dead chimps can’t be used, to the same extent as live chimps, to aid in medical research.)
So, what is the current value that we place on the life of a chimp?
If, after m (successful) studies each using n chimps, we can save l human lives, then (assuming the worst case, in which each study kills all n of its chimps):
(m × n) × (the value of a chimp life in utilons) = l × (the value of a human life in utilons)
So: (m × n) / l = (the value of a human life) / (the value of a chimp life)
This estimate is going to be higher than in real life, as we don’t kill all the chimps used in a typical study. The difficulty would be in quantifying the number of studies necessary to save a human life, or the number of lives saved by a particular discovery.
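As a sanity check, here’s a minimal sketch of that estimate in Python; all of the inputs below are invented purely for illustration:

    def human_to_chimp_ratio(m_studies, n_chimps_per_study, l_lives_saved):
        """Worst-case estimate (every chimp in every study dies): returns (m*n)/l,
        the number of chimp lives that trade against one human life."""
        return (m_studies * n_chimps_per_study) / l_lives_saved

    # Invented inputs: 50 studies of 10 chimps each, jointly saving 2 human lives,
    # give 250 chimps per human life -- within the 200-300 range I give below.
    print(human_to_chimp_ratio(50, 10, 2))  # 250.0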
However, thinking this way, I would place my ‘exchange rate’ on the order of 200-300 chimps to 1 human life; if necessary, we should let 1 human die so that 300 chimps might live, since their value as test subjects could then be used to save other humans.
I just don’t think chimps are intelligent enough for their lives to be on the same order of magnitude of significance as a human’s; I think that 1/3 or 1/10th of a human’s life is much too high a value.
Have you corrected for your estimate of p(chimps are uplifted in the next fifty years)?
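For what it’s worth, the correction I have in mind is just an expected-value adjustment. A minimal sketch, with every number invented for illustration:

    # Expected moral weight of a chimp, once possible uplift is factored in.
    w_chimp_now = 1 / 250   # today's exchange rate (your estimate above)
    w_person = 1.0          # the weight of a full person
    p_uplift = 0.05         # a made-up guess at p(chimps uplifted in 50 years)

    expected_weight = (1 - p_uplift) * w_chimp_now + p_uplift * w_person
    print(expected_weight)  # ~0.054, roughly 13x the uncorrected 1/250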
Edit: Okay, if it makes a difference, I only realized the Planet of the Apes reference after I posted; I was making a serious point about the difference between human toddlers and chimps as it relates to the possibility of future personhood.
I hadn’t considered the possibility that chimps could/would be uplifted in the near future (50 years, or the mean chimp lifetime, is a good rule of thumb); I think it’s entirely possible that the technology would be there, but I don’t understand the motivation for wanting to uplift chimps. I guess the reasoning is that more sapient beings == more interesting conversations, more math proofs, more works of art, so more Fun, but I’m not sure that we would want to uplift chimps if we had the technology to do so.
If we had the technology to uplift a species, I think it would be likely that we had the technology to have FAI or uploaded human brains, which would be a more efficient way to have more sapient beings with which to talk. Is it immoral to leave other species the way they are if transhumanism or FAI take off?
If we had the technology to uplift a species, I think it would be likely that we had the technology to have FAI or uploaded human brains, which would be a more efficient way to have more sapient beings with which to talk.
This seems strange to me. Can you expand on your reasoning? Uplifting seems to me to be potentially a lot simpler. The tech level needed to identify the genes most responsible for human intelligence is not that much beyond our current one. And the example species you’ve used, chimps, are close enough to humans that, for at least some of those genes, simply inserting them into the chimp genome would likely substantially increase their intelligence.
Uplifting seems orders of magnitude easier than uploading at least.
I’ll concede that you are probably right about uplifting being easier.
This was my reasoning:
Properly identifying which gene encodes for what, and usefully altering genes to express a phenotype as complex as human-level intelligence, would require (in any reasonable amount of time) at least a narrow AI to process and refine the huge amount of data in the half-chromosome or so that separates us from chimps. Chimps are close to humans, yes, but altering their DNA to uplift them seems to me to be the type of problem that would take either years of Manhattan-Project-level dedication with the technology we have right now, or some sort of AI to do the heavy lifting for us.
I think I’m way out of my depth here, though, as I don’t know enough about genetic engineering or AI research to know with confidence which would be easier.
If the following is very wrong or morally abhorrent, please correct me rather than downvote. I’m trying to work it out for myself and what I came up with seems intuitively incorrect. It is also based on the idea that the mentally handicapped have chimp-like intelligence, which I don’t know to be true but is implied by your comment.
So basically, what makes us Homo sapiens is our ancestry, but what makes us people is our intelligence. An alien with a brain that somehow worked exactly equivalently to ours would be our equal in every important way, but an alien with a chimp-like intelligence (one that for our purposes would essentially BE a chimp) wouldn’t. It would deserve sympathy, and it would be wrong to hurt it for no reason, but I wouldn’t value an alien-chimp’s life as highly as a human’s or an alien-human’s.
So it seems to me that it follows that the mentally handicapped (if they indeed have chimp-like intelligences) don’t in fact deserve more moral consideration than alien-chimps or earth-chimps (ignoring their families, which presumably have normal intelligences and would very much not approve of their use in experiments). If there are no safer ways to get the same results as we do from chimp studies, which I believe to be the case, then the best option we have for now is to continue studying chimps. Studying the mentally handicapped would be similarly bad-but-acceptable, but I wouldn’t advocate for it, since it would be so unlikely to ever occur. Testing on the mentally handicapped seems very wrong, but only for “speciesist” reasons as far as I can tell.
It is also based on the idea that the mentally handicapped have chimp-like intelligence, which I don’t know to be true but is implied by your comment.
I specified “people mentally handicapped to the point that they are equivalent to chimps.” There’s a lot of ways one can be mentally handicapped.
For the record, I’m a vegetarian. I measure morality based on median suffering/life satisfaction. Intelligence is only valuable insofar as it can improve those metrics, and certain kinds of intelligence probably result in wider and deeper sources of life satisfaction.
I don’t think chimps contribute dramatically to universal flourishing, but I’m not sure that the average human does either. I think that it’s best to have a rule “don’t harm sentient creatures”, but to occasionally turn a blind eye to certain actions that benefit us in the long term.
E.g., the guy who invented the smallpox vaccine did something horribly unethical, which we should not allow on a regular basis, especially not today when we have more options for testing. Occasionally, doing something like that is necessary for the greater good, but most people who think their actions are sufficiently “greater good” to break the rules are wrong, so we need to discourage it in general.
This is a nice rule in principle, but in practice becomes tough. First, how do we define sentience? Second, what constitutes “don’t harm”? Is there an action/inaction distinction here? If it is morally unacceptable to let humans in the developing world starve, do we have a similar moral obligation to chimps? If not, why not?
the guy who invented the smallpox vaccine did something horribly unethical, which we should not allow on a regular basis, especially not today when we have more options for testing
I’m not sure what you are talking about here. Can you expand?
This is a nice rule in principle, but in practice becomes tough.
Oh in practice it’s definitely tough. Optimal morality is tough. I judge myself and other individuals on the efforts they’ve made to improve from the status quo, not on how far they fall short of what they might hypothetically be able to accomplish with infinite computing power.
In my ideal world, suffering doesn’t happen, period, except to the degree that some amount of suffering is necessary to bring about certain kinds of happiness. (I.e., everyone, animals included, gets exactly as much as they need, nothing more.)
I don’t know to what extent that’s actually possible without accidentally wreaking havoc on the ecosystem and causing all kinds of problems, and in the meantime it’s easier to get public support for helping other humans anyway.
Smallpox
I’m working from old memories from middle school, and referencing what is probably a bit of a “folk version” of the real thing, but my recollection was that Edward Jenner tested his smallpox vaccine on some kid, then gave the kid a full dose of smallpox without his consent.
SOMEBODY had to try that at some point, and I think Jenner had reasonable evidence, but I don’t think that sort of thing would fly today.
I agree it wouldn’t pass muster today, but that may just be because we aren’t facing a disease as deadly as smallpox.
There’s a good moral case for experimenting on somebody without their consent IF:
1) Doing the experiment has a high probability of getting a cure into widespread use quickly
2) Getting consent for an equivalent experiment would be difficult or time-consuming
3) The disease is prevalent and serious enough that a delay to find a consenting subject is a bigger harm than the involuntary experiment. (A rough version of this comparison is sketched below.)
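Here is a minimal sketch of what the comparison in condition 3 might look like, with all numbers invented for illustration:

    # Compare the expected harm of delay vs. the harm of the involuntary experiment,
    # both in expected lives lost. Every figure here is hypothetical.
    deaths_per_day = 400            # toll of the epidemic while we search for consent
    days_saved_by_acting_now = 30   # delay avoided by skipping consent
    experiment_harm = 1.0           # harm to the involuntary subject, in life-equivalents

    delay_harm = deaths_per_day * days_saved_by_acting_now  # 12000 expected deaths
    print(delay_harm > experiment_harm)  # True: condition 3 would be satisfied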
If you think our moral concern should follow intelligence, then it follows that chimps and the mentally handicapped are not morally equal to humans of normal intelligence. Depending on how much differing intelligence results in differing moral consideration, this could justify testing on chimps and the mentally handicapped.
But while some level of intelligence does seem to be necessary for an animal to suffer in a way we find morally compelling, it does not follow that abusing the slightly less intelligent is at all justified. It is not at all obvious that the mentally handicapped or chimpanzees suffer less than humans of normal intelligence. Nor is it obvious that mentally handicapped humans and chimpanzees don’t differ in this regard. But intelligence is almost certainly not the same thing as moral value. There are possibly entities that are very intelligent but for which we would have little moral regard.
Right, that makes sense. I guess if something can suffer, and notice its suffering, and wish it weren’t suffering, then it should be as morally valuable as a person...maybe.
But while some level of intelligence does seem to be necessary for an animal to suffer in a way we find morally compelling, it does not follow that abusing the slightly less intelligent is at all justified.
I think dogs are “capable of suffering in a way I find morally compelling” though, and I would sacrifice probably a lot of dogs to save myself or another human. Is that just me being heartless?
There are possibly entities that are very intelligent but for which we would have little moral regard.
I mentioned that the hypothetical aliens would have brains that work just like ours, not that they would be just as intelligent.
Your method should be to figure out what it is about humans that makes them morally valuable to you and then see if those traits are found in the same degree elsewhere.
Agreed.
don’t harm sentient creatures
Unless they have it coming! I consider it unethical not to harm sentient creatures in certain circumstances.
I agree.