There is widespread agreement on the status of many animals.
Only to the extent that “conscious” doesn’t carry any weight or any expectation of good treatment. There is very little agreement on what an animal’s level of consciousness implies about how its life or happiness should be weighed against any human’s.
We have a special opportunity at the start of our interactions with AI systems to decide how we’re going to relate to them
I don’t follow. How is it easier (or more special as an opportunity) to decide how to relate to an AI system than to a chicken or a distant human?
We have a lot more potential control over artificial systems than we do over natural creatures
Really? Given the amount of change we’ve caused in natural creatures, the amount of effort we spend in controlling/guiding fellow humans, and the difficulty in defining and measuring this aspect of ANY creature, I can’t agree (I can’t strongly disagree either, though, because I don’t really understand what this means).
I don’t follow. How is it easier (or more special as an opportunity) to decide how to relate to an AI system than to a chicken or a distant human?
I think that our treatment of animals is a historical problem. If there were no animals, if everyone was accustomed to eating vegetarian meals, and then you introduced chickens into the world, I believe people wouldn’t be inclined to stuff them into factory farms and eat their flesh. People do care about animals when they are not themselves complicit in harming them (consider the outcry over whaling or dog fighting), but it is hard for most people to leave the moral herd and it is hard to break with tradition. The advantage of thinking about digital minds is that traditions haven’t been established yet and the moral herd doesn’t know what to think. There is no precedent of, or complicity in, ill treatment. That is why it is easier for us to decide how to relate to them.
Really? Given the amount of change we’ve caused in natural creatures, the amount of effort we spend in controlling/guiding fellow humans, and the difficulty in defining and measuring this aspect of ANY creature, I can’t agree.
In order to make a natural creature happy and healthy, you need to work with its basic evolution-produced physiology and psychology. You’ve got to feed it, educate it, socialize it, accommodate its arbitrary needs and neurotic tendencies. We would likely be able to design the psychology and physiology of artificial systems to our specifications. That is what I mean by having a lot more potential control.
Ah, I think we have a fundamental disagreement about how the majority of humans think about animals and each other. If the world were vegetarian, and someone created chickens, I think it would NOT lead to many chickens leading happy chicken lives. It would either be an amusing one-time lab experiment (followed by death and extinction) or the discovery that they’re darned tasty and a very concentrated, portable source of nutrition, which would lead to creating them primarily as food.
I’m not sure wireheading an AI (so it’s happy no matter what) is any more (or less) acceptable than doing so to chickens (by breeding them for smaller brains and larger breasts).