We’re not talking about sapience though, we’re talking about sentience. Why does the ability to think have any moral relevance? Only possessing qualia, being able to suffer or feel joy, is relevant, and most animals likely possess that. I don’t understand the distinctions you’re making in your other comment. There is one binary distinction that matters: is there something it is like to be this thing, or is there not? If yes, its life is sacred; if no, it is an inanimate object. The line seems absolutely clear to me. Eating fish or shrimp is bad for the same reasons that eating cows or humans is. They are all on the exact same moral level to me. The only meaningful dimension of variation is how complex their qualia are: if I have to choose, I’d rather eat entities with less complex qualia than those with more. But I don’t think the differences are that strong.
That is a very different moral position from the one I hold. I’m curious what your moral intuitions say about the qualia of reinforcement learning systems. Have you considered that many machine learning systems seem to contain structures that would compute qualia much as a nervous system does, and that such structures are indeed more complex than the nervous systems of many living creatures, such as jellyfish?
I don’t know what to think about all that. I don’t know how to determine where the line falls between having qualia and not. I just feel certain that any organism with a brain sufficiently similar to a human’s (certainly all mammals, birds, reptiles, fish, cephalopods, and arthropods) has some sort of internal experience. I’m less sure about jellyfish and the like. I suppose the intuition probably comes from the fact that the entities I mentioned seem to actively orient themselves in the world, but it’s hard to say.
I don’t feel comfortable speculating about which AIs have qualia, or whether any do at all. I am not convinced of functionalism and suspect that consciousness has something to do with the physical substrate, primarily because I can’t imagine how consciousness could be subjectively continuous (one of its most fundamental traits, in my experience!) in the absence of a continuously inhabited brain, rather than as a program that can be loaded into and out of anything and copied endlessly, with no fixed temporal relation between subjective moments.