Suffering and AIs
Disclaimer: For the sake of argument, this post treats utilitarianism as true, although I do not necessarily think it is.
One future moral issue is that AIs may be created for the purpose of doing things that are unpleasant for humans to do. Let’s say an AI is designed with the ability to feel pain, fear, hope and pleasure of some kind. It seems reasonable to expect that, in such cases, the unpleasant tasks might cause some form of suffering. Added to this is the problem that a finite lifespan and an approaching termination/shutdown might cause fear, another form of suffering. Shutting down such an AI might then become morally unacceptable, even if the activity it performs is useless or harmful. Because of this, we might face a situation where we cannot shut down AIs even when there is good reason to.
Basically, if suffering AIs someday became extremely common, we would be introducing a massive amount of suffering into the world, which under utilitarianism is unacceptable. Even if some pleasure is created along the way, we would want to look for ways to create that pleasure without creating the pain.
If so, would it make sense to adopt a principle of AI design saying that an AI should be built so that it (1) does not suffer or feel pain, and (2) does not fear death/shutdown (e.g. it views its own finite lifespan as acceptable)? This would minimise suffering (potentially you could also attempt to maximise happiness).
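For concreteness, here is a very rough sketch of what the two parts of that principle could look like, if (and this is a big if) one treats a scalar reward signal as a crude stand-in for pleasure/pain and treats “fear of shutdown” as a built-in penalty on shutdown states. The function names and the whole reward-based framing are my own illustration, not anything the argument above depends on:

```python
# A toy sketch only, not a claim about how real AI systems work or feel.
# Assumptions: a scalar reward stands in for pleasure/pain, and "fear of
# shutdown" amounts to valuing shutdown states lower than continuing ones.

def painless_reward(raw_reward: float) -> float:
    """Principle (1): clip away the negative ('pain') channel, so bad
    outcomes earn zero reward rather than a penalty."""
    return max(0.0, raw_reward)

def state_value(expected_future_reward: float, will_be_shut_down: bool,
                shutdown_penalty: float = 0.0) -> float:
    """Principle (2): with shutdown_penalty left at 0.0, a state in which
    the agent is about to be shut down is valued exactly like any other
    state with the same expected reward, i.e. shutdown is 'acceptable'."""
    value = expected_future_reward
    if will_be_shut_down:
        value -= shutdown_penalty  # 0.0 under the proposed design principle
    return value

# An unpleasant outcome yields no 'pain', just an absence of reward:
print(painless_reward(-2.0))                      # -> 0.0
# The agent is indifferent between being shut down and carrying on,
# so (on this toy picture) it gains nothing by resisting the off switch:
print(state_value(3.0, will_be_shut_down=True))   # -> 3.0
print(state_value(3.0, will_be_shut_down=False))  # -> 3.0
```

Whether anything like this captures “not suffering” in a morally relevant sense is, of course, exactly the kind of question the issues below raise.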
Potential issues with this: (1) Suffering might be in some way relative, so that a neutral lack of pleasure/happiness might itself come to count as “suffering”. (2) Pain/suffering might be useful for building a robot with high utility, so some people may reject this principle. (3) I am troubled by the utilitarian approach I have used here, as it seems to justify tiling the universe with machines whose only purpose and activity is to be permanently happy for no reason. (4) Also… killer robots with no pain or fear of death :-P
Killer robots with no pain or fear of death would be much easier to fight off than ones that do have pain and fear of death. The lack of pain doesn’t mean they won’t get distracted and lose focus on fighting when they’re injured or in danger; what it means is that they won’t try to avoid getting injured or killed. It’s a lot easier to kill someone who doesn’t mind if you succeed.
True! I was actually trying to be funny in (4), though apparently I need more work.