The question is really “why does the AI have that exact limit?” Phrased in terms of classes, it’s “why does the AI have that specific class?”; having another, broader class that includes it doesn’t count, since the broader class doesn’t have the same limit.
After significant reflection, what I’m trying to say is that I think it is obvious that non-human animals experience suffering, and that this suffering carries moral weight (we would call most modern farming conditions torture, and other such words, if the same methods were applied to humans).
Furthermore, there are plenty of edge cases within humanity where people can’t learn mathematics or are otherwise substantially less intelligent than some non-human animals (the very young, if future potential doesn’t matter that much; or the very old, the mentally disabled, people in comas, etc.). I would prefer to live in a world where an AI holds that beings that suffer, but aren’t necessarily sufficiently smart, matter in general. I would also prefer that the people designing such AIs agree with this.
But the original argument is that we shouldn’t eat animals because AIs would treat us the way we treat animals. That argument presupposes an AI whose ethical system can’t be specified or controlled in detail, which is exactly why we have to worry about how it would treat us.
If you have enough control over the AI’s ethics to design it to care about suffering, then this argument doesn’t point to a real problem: if you could program the AI to care about suffering, surely you could just program it to care about humans directly. Then we could eat as many animals as we wanted and the AI still wouldn’t use that as a basis for mistreating us.
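To make the structural point concrete, here’s a toy sketch in Python. Everything in it is hypothetical (the Being class, the utility function, the weights); nobody knows how to actually write down a value function for an advanced AI. It only illustrates that, under the assumption of this much control, a “care about humans” term is no harder to add than a “minimize suffering” term.

```python
from dataclasses import dataclass

# Hypothetical toy model, purely to illustrate the argument above.
@dataclass
class Being:
    is_human: bool
    suffering: float  # made-up scalar measure of suffering, 0.0 = none

def utility(beings: list[Being],
            suffering_weight: float = 1.0,
            human_weight: float = 0.0) -> float:
    """Toy aggregate utility: penalize suffering, optionally add a direct
    bonus for human welfare. The point: if we can specify the suffering
    term at all, specifying the human term is the same kind of move."""
    total = 0.0
    for b in beings:
        total -= suffering_weight * b.suffering
        if b.is_human:
            total += human_weight  # direct "care about humans" term
    return total

# Example: an AI weighted this way prefers worlds with less suffering
# and with protected humans, regardless of how humans treat animals.
world = [Being(is_human=True, suffering=0.1),
         Being(is_human=False, suffering=4.0)]
print(utility(world, suffering_weight=1.0, human_weight=10.0))
```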
Yes, I guess I was operating under the assumption that we would not be able to constrain the ethics of a sufficiently advanced AI at all by simple programming methods.
Though I’ve spent an extraordinarily large amount of time lurking on this and similar sites, on reflection I’m probably not the best-placed person to carry out a debate about how an AI’s hypothetical values might depend on ours. And indeed this would not be my primary justification for avoiding non-human suffering. I still think its avoidance is an incredibly important and effective meme to propagate culturally.