Ian: the issue isn’t whether it could determine what humans want, but whether it would care. That’s what Eliezer was talking about with the “difference between chess pieces on white squares and chess pieces on black squares” analogy. There are infinitely many computable quantities that don’t affect your utility function at all. The important job in FAI is determining how to create an intelligence that will care about the things we care about.
It’s certainly necessary for such an intelligence to be able to compute what humans want, but it’s far from sufficient.
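To make the computing-vs-caring distinction concrete, here’s a minimal sketch (the agent, the quantities, and the utility function are all hypothetical illustrations, not anything from Eliezer’s writing): the agent can compute human welfare perfectly well, but its utility function simply never reads that value.

```python
# Hypothetical illustration: an agent that can COMPUTE a quantity
# without that quantity affecting its utility function at all.

def human_welfare(world_state):
    # The agent can evaluate this perfectly well...
    return sum(person["wellbeing"] for person in world_state["humans"])

def utility(world_state):
    # ...but its utility function never consults it. It only cares
    # about, say, the number of chess pieces on white squares.
    return world_state["pieces_on_white_squares"]

def choose_action(actions, predict):
    # Standard expected-utility maximization: the agent picks whichever
    # action leads to the world it values most. human_welfare() is
    # computable, but it has zero influence on this choice.
    return max(actions, key=lambda a: utility(predict(a)))
```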