I tend to use ‘sentience’ to separate animal-like things which can sense their environment from plant-like things which can’t; and ‘sapience’ to separate human-like things which can think abstractly from critter-like things which can’t. At the least, that’s the approach that was in the back of my mind as I wrote the initial post. By these definitions, a paperclipper AI would have to be both sentient, in order to be sufficiently aware of its environment to create paperclips, and sapient, to think of ways to do so.
If I may ask, what quality are you describing with the word ‘sentience’?
Probably the same thing people mean when they say “consciousness”. At least, that’s the common usage I’ve seen.
I’m thinking of having feelings. I care about many critter-like things which can’t think abstractly, but do feel. But just having senses is not enough for me.
What you care about is not obviously the same thing as what is valuable to you. What’s valuable is a confusing question, and you shouldn’t be confident that you know the answer. You may provisionally decide to follow some moral principles (for example, to make it easier to exercise consequentialism), but making a decision doesn’t require being anywhere close to sure of its correctness. The best decision you can make may still, in your estimation, be much worse than the best theoretically possible decision (here, I’m applying this observation to the decision to provisionally adopt certain moral principles).
To use a knowingly-inaccurate analogy: a layer of sensory/instinctual lizard brain isn’t enough, a layer of thinking human brain is irrelevant, but a layer of feeling mammalian brain is just right?
Sounds about right, given the inaccurate biology.