AIs have some property that is “human-like”; therefore, they must be treated exactly as humans
Humans aren’t permitted to make inspired art because they’re human; we’ve just decided not to consider art as plagiarized beyond a certain threshold of abstraction and inspiration.
The argument isn’t that the AI is sufficiently “human-like”; it’s just that the process by which AI makes art is considered sufficiently similar to a process we already consider permissible.
I disagree that arbitrary moral consideration is okay, but I just don’t think that issue is really that relevant here.
Humans aren’t permitted to make inspired art because they’re human; we’ve just decided not to consider art as plagiarized beyond a certain threshold of abstraction and inspiration.
Well, the distinction never mattered until now, so we can’t really say what we have been doing. Now it matters how we interpret our previous intent, because these two things have suddenly become distinct.
I disagree that arbitrary moral consideration is okay, but I just don’t think that issue is really that relevant here.
What moral consideration isn’t on some level arbitrary? Why is this or that value a better inherent indicator of worth than just being human at all? I think even if your goal is just to better understand and formalize human moral intuitions, something like “intelligence” obviously doesn’t cut it.
Well, the distinction never mattered until now, so we can’t really say what we have been doing. Now it matters how we interpret our previous intent, because these two things have suddenly become distinct.
Even if we assume that this is some privilege granted to humans because they’re human, it doesn’t make sense to debate whether a human-like process should be granted the same privilege on account of that similarity. Humans would be granted the privilege because they have an interest in what the privilege grants. An algorithmic process doesn’t necessarily have an interest no matter how similar the process is to a human process, so it doesn’t make sense to grant it the privilege.
If the algorithmic process does have an interest, then it might make sense to grant it the privilege. At that point, though, it would seem like a very convoluted means of adjudicating copyright claims. Also, if we’ve advanced to the point at which AIs have actual subjective interests, I don’t think copyright laws will matter much.
What moral consideration isn’t on some level arbitrary? Why is this or that value a better inherent indicator of worth than just being human at all? I think even if your goal is just to better understand and formalize human moral intuitions, something like “intelligence” obviously doesn’t cut it.
I think the capacity to experience qualitative states of consciousness (e.g. suffering, wellbeing) is what should be considered when allocating moral consideration.
Humans would be granted the privilege because they have an interest in what the privilege grants. An algorithmic process doesn’t necessarily have an interest no matter how similar the process is to a human process, so it doesn’t make sense to grant it the privilege.
Well, yes, that’s kind of my point. But very few people seem to go along with the principle of “granting privileges to humans is fine, actually”.
I think the capacity to experience qualitative states of consciousness (e.g. suffering, wellbeing) is what should be considered when allocating moral consideration.
I disagree; I can imagine entities who experience such states and with whom I still cannot possibly coexist. And if it’s me or them, I’d rather it be me who survives.
But very few people seem to go along with the principle of “granting privileges to humans is fine, actually”.
Because you’re using “it’s fine to arbitrarily prioritize humans morally” as the justification for this privilege. At least that’s how I’m understanding you.
If you told me it’s okay to smash a statue in the shape of a human, because “it’s okay to arbitrarily grant humans the privilege of not being smashed, on account of their essence of humanness, and although this statue has some human qualities, it’s okay to smash it because it doesn’t have the essence of humanness”,
then I would take issue with your reasoning, even though I wouldn’t necessarily have a moral problem with you smashing the statue. I would also just be very confused about why that type of reasoning would be relevant in this case. I would, however, take issue with you smashing an elephant on the grounds that it isn’t a human.
I disagree; I can imagine entities who experience such states and with whom I still cannot possibly coexist. And if it’s me or them, I’d rather it be me who survives.
I’m sure there are also humans that you cannot possibly coexist with.
I’m also just saying that’s the point at which it would make sense to start giving moral consideration to an art generator. But even so, I reject the idea that the moral permissibility of creating art is based on some privilege granted to those possessing some essential trait.
I don’t think the moral status of a process will ever be relevant to the question of whether art made from that process meets some standard of originality sufficient to repel accusations of copyright infringement.
Because you’re using “it’s fine to arbitrarily prioritize humans morally” as the justification for this privilege. At least that’s how I’m understanding you.
I think it’s fine for now, absent a more precise definition of what we consider human-like values and worth, which we obviously do not understand well enough to narrow down. I think the category is somewhat broader than humans, but I’m not sure I can give a better feel for it than “I’ll know it when I see it”, and that very ignorance seems to me an excellent reason not to go gallivanting about creating other potentially sentient entities of questionable moral worth.
I’m sure there are also humans that you cannot possibly coexist with.
Not many of them, and usually they indeed end up in jail or on the gallows because of their antisocial tendencies.
Let me suggest a candidate larger fuzzy class:
“sapiences that are (primarily) the result of Darwinian evolution, and have not had their evolved priorities and drives significantly adjusted (for example into alignment with something else)”
This would include any sufficiently accurate whole-brain emulation of a human, as long as they hadn’t been heavily modified, especially in their motivations and drives. It’s intended to be a matter of degree, rather than a binary classification. I haven’t defined ‘sapience’, but I’m using it in a sense in which Homo sapiens is the only species currently on Earth that would score highly for it, and one of the criteria for it is that a species be able to support cultural & technological information transfer between generations that is >> its genetic information transfer.
The moral design question then is: supposing we were to suddenly encounter an extraterrestrial sapient species, do we want our AGIs to be on the human side, or on the “all evolved intelligences count equally” side?
The moral design question then is: supposing we were to suddenly encounter an extraterrestrial sapient species, do we want our AGIs to be on the human side, or on the “all evolved intelligences count equally” side?
I’d say something in between. Do I want the AGI to just genocide any aliens it meets on the simple basis that they are not human, so they do not matter? No. Do I want the AGI to stay neutral and refrain from helping us or taking sides were we to meet the Thr’ax Hivemind, Eaters of Life and Bane of the Galaxy, because they too are sapient? Also no. I don’t think there’s an easy answer to where we draw the line between “we can find a mutual understanding, so we should try” and “it’s clearly us or them, so let’s make sure it’s us”.