The two scenarios have equal utility to me, as close as I can tell. The paperclipper (and the many copies of itself it would make) would be minds optimized for creating and maintaining paperclips (though maybe it would eventually kill itself off to create more paperclips?) and would not be sentient. In contrast to you, I think I care about sentience, not sapience. To the very small extent that I saw the paperclipper as a person, rather than as a force of clips, I would wish it ill, but only in a half-hearted way, which wouldn't scale to disutility for every paperclip it successfully created.
I tend to use ‘sentience’ to separate animal-like things which can sense their environment from plant-like things which can’t; and ‘sapience’ to separate human-like things which can think abstractly from critter-like things which can’t. At the least, that’s the approach that was in the back of my mind as I wrote the initial post. By these definitions, a paperclipper AI would have to be both sentient, in order to be sufficiently aware of its environment to create paperclips, and sapient, to think of ways to do so.
If I may ask, what quality are you describing with the word ‘sentience’?
Probably the same thing people mean when they say “consciousness”. At least, that’s the common usage I’ve seen.
I’m thinking of having feelings. I care about many critter-like things which can’t think abstractly, but do feel. But just having senses is not enough for me.
What you care about is not obviously the same thing as what is valuable to you. What's valuable is a confusing question, and you shouldn't be confident that you know its solution. You may provisionally decide to follow some moral principles (for example, to make consequentialist reasoning easier to exercise), but making a decision doesn't require being anywhere close to sure of its correctness. The best decision you can make may still, in your estimation, be much worse than the best theoretically possible decision (here, I'm applying this observation to the decision to provisionally adopt certain moral principles).
To use a knowingly-inaccurate analogy: a layer of sensory/instinctual lizard brain isn’t enough, a layer of thinking human brain is irrelevant, but a layer of feeling mammalian brain is just right?
Sounds about right, given the inaccurate biology.
How about a sentient AI whose utility function is orthogonal to yours? You care nothing about anything it cares about and it cares about nothing you care about. Also, would you call such an AI sentient?
You said it was sentient, so of course I would call it sentient. I would either value that future or disvalue it. I'm not sure to what extent I would be glad some creature was happy, or to what extent I'd be mad at it for killing everyone else, though.