OK, let’s do some basic friendly AI theory: would a friendly AI lexically discount the welfare of “weaker” beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI? If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, since our decisions can correspondingly result in bad things for them.
My bad about the ritual. Thanks. Out of interest about your preferences: imagine the grandmother and the dog next to each other. A perfect scientist starts to exchange pairs of atoms (let’s assume here that both individuals contain the same number of atoms) so that the grandmother gradually transforms into the dog (of course there will be several weird intermediary stages). Since the scientist knows his experiment very well, neither of the two will die; in the end it will look as though the two objects have changed places.
At which point does the grandmother stop counting lexically more than the dog?
Sometimes continuity arguments can be defeated by saying: “No, I don’t draw an arbitrary line; I adjust gradually, so that in the beginning I care a lot about the grandmother and in the end just very little about the remaining dog.” But I think that this argument doesn’t work here, because we are dealing with a lexical prioritization. How would you act in such a scenario?
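Why a lexical prioritization resists the “gradual adjustment” move can be made concrete with a small sketch (purely illustrative; the names, numbers, and the threshold value are assumptions of mine, and the arbitrariness of that threshold is exactly the point at issue):

```python
# Purely illustrative sketch contrasting a weighted prioritization,
# which can be adjusted smoothly, with a lexical one, which cannot.
# All names, numbers, and the 0.5 threshold are hypothetical.

def weighted_value(welfare: float, weight: float) -> float:
    """A weighted view: concern can shrink smoothly as atoms are swapped."""
    return weight * welfare

def lexical_rank(fraction_swapped: float, threshold: float = 0.5) -> int:
    """A lexical view: the being either outranks the dog entirely (1)
    or it does not (0). There is no in-between, so some threshold must
    be picked, and any choice of threshold is arbitrary."""
    return 1 if fraction_swapped < threshold else 0

# A weight can descend smoothly from 1.0 (grandmother) to 0.1 (dog)...
weights = [round(1.0 - 0.9 * step / 10, 2) for step in range(11)]

# ...but the lexical rank must flip sharply at some swap step.
ranks = [lexical_rank(step / 10) for step in range(11)]
# ranks == [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
```

The sharp flip in `ranks` is the discontinuity the thought experiment targets: a gradual-adjustment reply is available for `weighted_value`, but not for `lexical_rank`.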
A perfect scientist starts to exchange pairs of atoms (let’s assume here that both individuals contain the same number of atoms) so that the grandmother gradually transforms into the dog (of course there will be several weird intermediary stages).
Identity isn’t in specific atoms. The effect of swapping a carbon atom in the grandma with a carbon atom in the dog is none at all.
Jiro’s response shows one good reason why I don’t find that thought experiment very interesting. Another obvious reason is its extreme implausibility and, I strongly suspect, actual incoherence (given what we know about physics and biology). I think I can safely say “I have no idea what I would prefer”, much like Eliezer finds no reason to answer how he would explain his arm being turned into a blue tentacle, and not have that be counted against me.
On to FAI theory:
Would a friendly AI lexically discount the welfare of “weaker” beings such as you and me (compared to this hyper-agent)? Could that possibly be an FAI?
By definition, it would not, because if it did, then it would be an Unfriendly AI.
If not, then I think we should also rethink our moral behaviour towards weaker beings in our game here, since our decisions can correspondingly result in bad things for them.
How do you get from facts about the behavior of an FAI to claims about how we should act? I spy one of those pesky “is-ought” transitions that bedeviled Hume!
Corollary: why should we care that our behavior results in bad things for animals? Isn’t that the question in the first place, and doesn’t your statement beg said question?
You can ask the same question with the grandmother turning into a tree instead of into a dog.