As I’m sure you agree, there’s a sense in which humans don’t have values, just things like thoughts and behaviors. My impression is that much of the confusion around these topics comes from the idea that going from a system with thoughts and behaviors to a description of the system’s “true values” is a process that doesn’t itself involve human moral judgment.
ETA: You could arguably see Clippy as a staple-maximizer congenitally stuck not reflecting on an isolated implicit belief that whatever maximizes paperclips also maximizes staples. So if you said Clippy was “helped” more by humans making paperclips than by humans making staples, that would, I think, be a human moral judgment, and one that might be more reasonable to make about some AIs that output Clippy behaviors than about other AIs that output Clippy behaviors, depending on their internal structure. Or if that question has a non-morally-charged answer, then how about the question in the parent comment: whether humans “really” are egoists, or “really” are altruists with an implicit belief that other people’s experiences aren’t as real? I could see neuroscience results arguing for one side or the other, but I think the question of which exact neuroscience results would argue for which answer is itself a morally loaded one. Or I could be confused.
there’s a sense in which humans don’t have values, just things like thoughts and behaviors.
Between values, thoughts, and behaviors, it seems the larger gap is between behaviors on the one hand and thoughts and values on the other. Given a neurological description of a human being, locating “thoughts” in that description would seem roughly comparable in difficulty to locating “values” therein. Not that I take this to show there are neither thoughts nor values; such a conclusion would more likely indicate overly narrow definitions of “thought” and “value.”
I could see neuroscience results arguing for one side or the other, but I think the question of what exact neuroscience results would argue for which answer is a morally loaded one.
Between values, thoughts, and behaviors, it seems like the larger gap is between behaviors on the one hand, and thoughts and values on the other. Given a neurological description of a human being, locating “thoughts” in that description would seem roughly comparable in difficulty to locating “values” therein. Not that I take this to show there are neither thoughts nor values. Such a conclusion would likely indicate overly narrow definitions of a thought and a value.
I think so too.