I would agree that we’re often in positions where we’re forced to choose between two things that we value and we just don’t know how to make that choice.
Sometimes, as you say, it’s because we don’t know how to compare the two. (Talk of numerical comparison is, I think, beside the point.) Sometimes it’s because we can’t accept giving up something of value, even in exchange for something of greater value. Sometimes it’s for other reasons.
I would agree that coming up with a way to evaluate possible states of the world that takes into account all of the things humans value is very difficult. This is true whether the evaluation is by means of a utility function for fAI or via some other means. It’s a hard problem.
I would agree that replacing the hard-to-compute value function(s) I actually have with some other value function(s) that are easier to compute is not desirable.
Building an automated system that can compute the hard-to-compute value function(s) I actually have more reliably than my brain can—for example, a system that can evaluate various possible states of the world and predict which ones would actually make me satisfied and fulfilled to live in, and be right more often than I am—sounds pretty desirable to me. I have no more desire to make that calculation with my brain, given better alternatives, than I have to calculate square roots of seven-digit numbers with it.
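To make the calculator analogy concrete, here is a minimal sketch in Python. The names (WorldState, SatisfactionModel) are entirely hypothetical, not any real library; the point is just the shape of the delegation: a predictor scores candidate world-states on my behalf, the same way math.sqrt takes over the seven-digit arithmetic.

```python
import math
from dataclasses import dataclass
from typing import Callable, Dict, List

# Delegating an exact but tedious calculation: nobody does this by hand.
print(math.sqrt(4_831_204))  # 2198.0

@dataclass
class WorldState:
    """A candidate state of the world (hypothetical illustration)."""
    description: str
    features: Dict[str, float]  # whatever observable facts the predictor conditions on

class SatisfactionModel:
    """Stand-in for a learned predictor of how satisfied and fulfilled I
    would actually be living in a given state, assumed (per the argument
    above) to be better calibrated than my in-the-moment judgment."""

    def __init__(self, predict: Callable[[WorldState], float]):
        self._predict = predict

    def best_of(self, candidates: List[WorldState]) -> WorldState:
        # Pick the candidate with the highest predicted satisfaction.
        return max(candidates, key=self._predict)

# Toy usage with a deliberately crude predictor standing in for the
# hard-to-compute value function(s) I actually have.
model = SatisfactionModel(predict=lambda s: s.features.get("leisure", 0.0))
states = [
    WorldState("grind", {"leisure": 0.2}),
    WorldState("balance", {"leisure": 0.7}),
]
print(model.best_of(states).description)  # balance
```

The sketch leaves the hard part (where a trustworthy predict function comes from) entirely open; that is exactly the hard problem conceded above.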