AI, Consciousness, and the Problem of Moral Considerability
The problem I intend to work through in this (hopefully short) post is that of moral considerability with regard to artificial intelligence systems. This is not a problem we should wait for advanced AI to make urgent, and it doesn’t really hinge on “advanced” AI anyway, so how far off that is matters little. The question is “what gives an entity moral worth?”, and my general position is that current arguments lean too heavily on consciousness (which is too loosely defined) and qualia (which are not testable, provable, or even guessable).
I’ll start with Peter Singer’s position that the capacity to feel pain is what gives an entity moral worth. Given his influence on the EA community, I imagine this view is not unpopular here. Singer uses the presence of a central nervous system to qualify some animals (but not clams, etc.) as deserving of consideration. That works fine for promoting animal welfare, but it’s not hard to build a program that quantifies “pain” and avoids it much as we do (a toy sketch follows below). We obviously don’t want to grant that program moral worth, so the criterion doesn’t hold up. Setting the bar too low isn’t just inaccurate; it turns basically every form of utilitarianism against us humans, because it’s cheaper and easier to help simulated entities than real ones.
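To make that concrete, here is a minimal sketch (every signal name, weight, and number below is invented for illustration) of a program that scores states for “pain” and picks whichever action is predicted to minimize it. Nothing here plausibly deserves moral consideration, yet it quantifies pain and avoids it.

```python
# Toy sketch of a "pain"-minimizing agent. Purely illustrative; the
# signals and weights are made up for the example.

def pain(state):
    """Score a state's 'pain' as a weighted sum of damage-like signals."""
    weights = {"tissue_damage": 5.0, "energy_deficit": 1.0, "overheating": 2.0}
    return sum(w * state.get(signal, 0.0) for signal, w in weights.items())

def choose_action(predicted_states):
    """Pick whichever available action is predicted to lead to the least pain."""
    return min(predicted_states, key=lambda action: pain(predicted_states[action]))

options = {
    "withdraw_limb": {"tissue_damage": 0.0, "energy_deficit": 0.6},
    "keep_still":    {"tissue_damage": 0.9, "energy_deficit": 0.5},
}
print(choose_action(options))  # -> withdraw_limb
```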
Some might say “yeah, sure, but if we can look at the system’s code and see that this block is really just weighted tendencies towards actions that promote some value x and away from actions that reduce x, then that’s not true pain”, as if there were something in consciousness that is not weighted tendencies. It’s also worth noting that if something had weights but couldn’t read its own weights, it would probably “think” of those weighted tendencies as intuition (a toy illustration follows below). The objection amounts to the Cartesian dualist position (that thought is not reducible to matter and numbers), but I think that for our actions to have any effect on whether AI develops something that fits our definition of consciousness, dualism must be false; if the dualists are right, then whether it happens is outside our power anyway. There are also serious retorts to dualism, notably the position that “consciousness” may simply be the most effective way for unthinking matter to plan things out.
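As a toy illustration of the “can’t read its own weights” point (again, everything here is invented for the example): the agent below is driven entirely by numeric weights, but its self-report path never consults them, so the best account it can give of its own behaviour is that an action “felt right.”

```python
import random

class OpaqueAgent:
    """Toy agent whose behaviour is weight-driven but whose self-reports are not."""

    def __init__(self):
        # The weights that actually determine behaviour.
        self._weights = {"approach": random.random(), "avoid": random.random()}

    def act(self):
        # Behaviour is fully determined by the weights.
        return max(self._weights, key=self._weights.get)

    def explain(self):
        # The self-report path never consults the weights, so the agent can
        # only describe its choice as a felt inclination -- an "intuition".
        return f"{self.act()} just felt like the right thing to do"

agent = OpaqueAgent()
print(agent.act())
print(agent.explain())
```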
The obvious response is “well, sure, that program avoids pain or whatever, but it doesn’t actually feel the feeling of pain, so it doesn’t count.” This is consciousness from qualia. Famous for its unprovability and for the Problem of Other Minds, this one sucks because there’s really no way to even get information on it. My favorite, and from what I can tell the strongest, argument against solipsism is the “best answer” argument (essentially inference to the best explanation): because the people around you act like you, they’re probably not animatronics, and if they were, there would have to be some other people who set them up; so if something acts like you and doesn’t spark when you throw water on it, it most likely has the same makeup you do. The problem remains that we can’t detect qualia at all, and the best-answer argument stops working once we build bots specifically to act the way we do. We can electrocute dead tissue, but we still can’t tell whether anything is felt.
Most people think this weird experience we all seem to share is what makes it not okay to kick each other’s shins. As we progress, we will probably be able to create programs that pass bars set by higher-order theories of consciousness, neural theories of consciousness with full brain emulations, or even quantum theories of consciousness. Meanwhile, we all just kind of have this idea of many fluttering thoughts in our heads, a collection of game save files, and a bounded set of actions and interests, and we call that ourselves. For our moral theories not to absolutely flip on us in a few years, when some prominent EA decides to put $1 million into cloud services to run a billion digital people living in bliss, we need to come up with a higher bar for consciousness.
Put ideas in the comments. Thanks.