“Do Artificial Reinforcement-Learning Agents Matter Morally?”, Tomasik 2014:

Artificial reinforcement learning (RL) is a widely used technique in artificial intelligence that provides a general method for training agents to perform a wide variety of behaviours. RL as used in computer science has striking parallels to reward and punishment learning in animal and human brains. I argue that present-day artificial RL agents have a very small but nonzero degree of ethical importance. This is particularly plausible for views according to which sentience comes in degrees based on the abilities and complexities of minds, but even binary views on consciousness should assign nonzero probability to RL programs having morally relevant experiences. While RL programs are not a top ethical priority today, they may become more significant in the coming decades as RL is increasingly applied to industry, robotics, video games, and other areas. I encourage scientists, philosophers, and citizens to begin a conversation about our ethical duties to reduce the harm that we inflict on powerless, voiceless RL agents.
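For readers unfamiliar with the technique the abstract discusses, the kind of agent at issue can be sketched with minimal tabular Q-learning, a standard RL algorithm: the agent acts, receives rewards, and strengthens the actions that led to reward, which is the reward-and-punishment dynamic Tomasik compares to animal learning. (The chain-world environment and all parameter values below are illustrative assumptions, not from the paper.)

```python
import random

# Minimal tabular Q-learning on a small "chain" world: the agent starts at
# state 0 and earns reward +1 for reaching the rightmost state. The positive
# reward reinforces the actions that led to it.

N_STATES = 5          # states 0..4; state 4 is terminal and rewarding
ACTIONS = (-1, +1)    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # temporal-difference update toward reward + discounted future value
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    # after training, "move right" should dominate in every non-terminal state
    print(all(q[s][1] > q[s][0] for s in range(N_STATES - 1)))
```

After enough episodes, the learned values propagate the terminal reward backwards (discounted by gamma), so the agent reliably prefers the rewarded direction; it is agents of roughly this kind, scaled up, whose moral status the paper examines.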
Particularly germane is the discussion of the disunified nature of cognition, and of experimental demonstrations that learning still happens while the hippocampus is knocked out by an anesthetic-like drug, as in “When Memory Fails, Intuition Reigns: Midazolam Enhances Implicit Inference in Humans”.