Do you think that an AI that does not take into account the preferences of cows is necessarily unFriendly (using EY’s definition)?
If I remember correctly, EY talks about Friendliness in regard to humanity, not in regard to cows; in that case the AI would take the preferences of cows into account only to the extent that the Coherent Extrapolated Volition of humanity would take them into account, no more, no less.
If yes, I don’t understand why you think it is acceptable to eat beef.
So as not to pretend to misunderstand you, I’ll assume you mean “I don’t understand why you think it’s acceptable to kill cows in order to have their meat”; we’re not talking about an already-butchered cow whose meat would go to waste if I didn’t eat it.
For starters, because cow-meat is yummy, and the preferences of humans severely outweigh the preferences of cows in my mind.
Now dolphin-meat or ape-meat I would not eat, and I would like to ban the killing of dolphins and apes both (outside of medical testing in the case of apes).
I think that’s a classic example of the mind-projection fallacy.
That’s such a weird interpretation of what I’m saying, because I’ve consistently acknowledged that blicket is not written in the laws of physics.
This means less than you seem to think, because after all concepts like “brains” or “genes” or for that matter even “atoms” and “molecules” aren’t written in the laws of physics either. So all I got from this statement of yours is that you think morality isn’t located at the most fundamental level of reality (the one occupied by quantum amplitude configurations).
And to counteract this you made statements like “it’s a property of the creature.”
Sexy(me, Jennifer Aniston) != sexy(me, Brad Pitt). Isn’t some of that difference attributable to different properties of Jennifer and Brad?
Of course, but you said “it’s a property of the creature”; you didn’t say “it’s partially a property of the creature” or “it’s a property of the relationship between the creature and me”.
Such miscommunication could have been avoided if you had been a bit more precise in your sentences.
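To make the one-place versus two-place distinction concrete, here is a minimal sketch; the function names and the numeric “appeal”/“preference” scores are invented purely for illustration:

```python
# Minimal sketch only: the function names and the numeric scores are made up;
# the point is the difference in arity, not the implementation.

def sexy_one_place(creature):
    """Sexiness treated as a property of the creature alone."""
    return creature["intrinsic_appeal"]  # ignores who is doing the judging

def sexy_two_place(judge, creature):
    """Sexiness treated as a relation between a judge and a creature."""
    return judge["preferences"].get(creature["name"], 0)

me = {"preferences": {"Jennifer Aniston": 9, "Brad Pitt": 2}}
jennifer = {"name": "Jennifer Aniston", "intrinsic_appeal": 8}
brad = {"name": "Brad Pitt", "intrinsic_appeal": 8}

# A one-place predicate cannot express sexy(me, Jennifer) != sexy(me, Brad);
# a two-place predicate can.
assert sexy_one_place(jennifer) == sexy_one_place(brad)
assert sexy_two_place(me, jennifer) != sexy_two_place(me, brad)
```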
You seem to think that nothing would be wrong with an FAI simulating an AI that wanted to die.
Not quite. I’ve effectively said that it wouldn’t necessarily be wrong.
But an AI does not lack blicket simply because it wants to die.
I never said it would lack blicket. Blicket would make me want to help a creature achieve its aspirations, which in this context would mean helping the AI to die.
Let me remind people again that I’m not talking about the sort of “wanting to die” that a suicidal human being would possess—driven by grief or despair or guilt or hopeless tedium grinding down his soul.
Of course, but you said “it’s a property of the creature”; you didn’t say “it’s partially a property of the creature” or “it’s a property of the relationship between the creature and me”.
Is primeness a property of a heap of five pebbles?
And is it a property of you or the pebbles that you don’t care about prime-pebbled heaps?
Okay, but my own view on the matter is that “blicket” is a continuum—most properties of creatures, both physical and mental, are continuums after all. Creatures probably range from having zero blickets (amoebas) to a couple blickets (reptiles) to lots of blickets (apes, dolphins) to us (the current maximum of blickets).
How is this use of the term different from the term “moral concern”? I’m trying to talk about creatures we give sufficient moral weight that the type of justifications for their treatment change. Killing cows takes different (and lesser) justification than killing humans.
I never said it would lack blicket. Blicket would make me want to help a creature achieve its aspirations, which in this context would mean helping the AI to die.
Is it fair to say that you don’t think it makes any moral difference whether you made the AI or found it instead?