Hmm, you present a convincing case, but the result seems to me to be a paradox.
On the one hand, we can’t ask about ultimate values or ultimate criteria in an unconditioned, ‘one-place’ way; we always need to assume some set of criteria or values in order to frame the question productively.
On the other hand, if we end up saying that human beings can’t ever sensibly ask questions about ultimate criteria or values, then we’ve gone off the rails.
I’m not saying you can’t ever ask questions about ultimate values, just that there isn’t some objective moral code wired into the fabric of the universe that would apply to all possible mind-designs. Whatever moral code we come up with, we have to come up with it using our own brains, and that’s okay. We’re also going to judge it with our own brains, since that’s where our moral intuitions live.
“The human value function”, if there is such a thing, is very complex, weighing tons of different parameters. Some things seem to vary a bit between individuals, but some are nearly universal within the human species, such as the proposition that killing is bad and we should avoid it when possible.
When wishing on a genie, you probably don’t want to just ask for everyone to feel pleasure all the time because, even though people like feeling pleasure, they probably don’t want to be in an eternal state of mindless bliss with no challenge or more complex value. That’s because the “human value function” is very complex. We also don’t know it. It’s essentially a black box: we can compute a value for outcomes and compare them, but we don’t really know all the factors involved. We can infer things about it from patterns in its verdicts, though, which is how we come up with generalities such as “killing is bad”.
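The “black box we can query but not inspect” picture can be made concrete with a toy sketch. Everything here is invented for illustration: the `hidden_value` weights and the outcome fields (`deaths`, `pleasure`, `challenge`) are arbitrary assumptions, not a claim about real human values. The point is only that a generality like “killing is bad” can be recovered purely from patterns in pairwise comparisons, without ever opening the box.

```python
import random

# Toy "black box" value function over outcomes. In the discussion we can
# only query it for verdicts; here the internals are written out, but the
# comparison code below never looks at them. All weights are arbitrary.
def hidden_value(outcome):
    return (-1000 * outcome["deaths"]      # killing is heavily penalized
            + 5 * outcome["pleasure"]
            + 3 * outcome["challenge"])

def prefer(a, b):
    """Compare two outcomes using only the black box's verdicts."""
    return a if hidden_value(a) >= hidden_value(b) else b

# Infer a generality ("killing is bad") from patterns in the verdicts:
# over many random pairs, the outcome with fewer deaths keeps winning,
# even though we never inspect the function's internals directly.
random.seed(0)
trials = 1000
wins_for_fewer_deaths = 0
for _ in range(trials):
    a = {"deaths": random.randint(0, 5),
         "pleasure": random.randint(0, 100),
         "challenge": random.randint(0, 100)}
    # b is like a but strictly worse on deaths; it may be better elsewhere
    b = {"deaths": a["deaths"] + random.randint(1, 5),
         "pleasure": random.randint(0, 100),
         "challenge": random.randint(0, 100)}
    if prefer(a, b) is a:
        wins_for_fewer_deaths += 1

print(wins_for_fewer_deaths / trials)  # 1.0 with these weights
```

With these particular made-up weights the fewer-deaths outcome wins every comparison, because the death penalty (1000 per death) outweighs the maximum possible gain elsewhere (800); that mirrors the idea that “killing is bad” shows up as a near-universal pattern rather than as one parameter among equals.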
So after all this discussion, what question would you actually want to ask the genie? You probably don’t want to change your values drastically, so maybe you just want to find out what they are?
It’s an interesting course of thought. Thanks for starting the discussion.
I’m not saying you can’t ever ask questions about ultimate values
Wait, why not? How would asking about ultimate values or criteria work, if we need to assume some value or criterion in order to productively deliberate?
I don’t quite know what to say about that.