no reason to expect that our specific instance of human values would be worst-case (this in particular is unlikely, since nobody is choosing human values adversarially with this problem in mind);
The reason UGC matters is that it can be used to show that, for many problems, even approximating the optimum is hard. Together with other (more widely believed) conjectures, this suggests that for many of those problems the average instance is almost as hard as the hardest instances. (I agree, however, with most of the rest of your comment.)
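For concreteness (this example is not from the comment above, but it is a standard consequence of UGC, due to Khot, Kindler, Mossel, and O'Donnell): assuming UGC, it is NP-hard to approximate Max-Cut within any constant factor better than the Goemans-Williamson ratio
\[
\alpha_{GW} \;=\; \min_{0 < \theta < \pi} \frac{2}{\pi}\,\frac{\theta}{1 - \cos\theta} \;\approx\; 0.878,
\]
so the approximation algorithm we already have would be essentially optimal for that problem.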
Can you elaborate on how we go from “approximating is hard” to “the average instance is hard”? (Or point me to somewhere I can read up on it?)
This is pushing the limits of my knowledge of the subject. Formalizing this is pretty difficult, and making such claims mathematically precise is currently open. See e.g. here(pdf), and there's been more technical work like this(pdf). Right now, the main reasons for believing it are essentially empirical: for most natural NP-complete problems, the average cases look to be about as hard as the worst cases, and given that, one would expect the same of the corresponding approximation problems. One direction for formalization is to use variants of the exponential time hypothesis, but in this context one then needs to distinguish between “the average instance is hard” and “random instances are hard.”
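For reference (my gloss, not the commenter's wording): the exponential time hypothesis asserts that there is some constant \(\delta > 0\) such that 3-SAT on \(n\) variables cannot be solved in time \(2^{\delta n}\), i.e., there is no \(2^{o(n)}\)-time algorithm for it. Any average-case analogue has to fix a distribution over instances, which is exactly where the distinction between “the average is hard” and “random instances are hard” enters.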
It is possible to use padding arguments to construct NP-complete problems where most random instances are fairly easy, but the set of hard instances of a given size still has to grow faster than any polynomial in the length of the input: otherwise NP would be in P/poly (by hard-coding the solutions to those rare hard instances), and that is strongly believed not to happen.
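Spelling out that step (my sketch of the standard argument, not the commenter's text): suppose \(L\) is NP-complete and decidable quickly on all inputs of length \(n\) outside a “hard” set \(H_n\) with \(|H_n| \le p(n)\) for some polynomial \(p\). Then for each \(n\) one can take as advice the list of members of \(H_n\) together with their answers; a polynomial-size circuit family checks the input against that list and otherwise runs the fast algorithm, so \(L \in \mathrm{P/poly}\) and hence \(\mathrm{NP} \subseteq \mathrm{P/poly}\). By the Karp-Lipton theorem this would collapse the polynomial hierarchy to its second level, which is widely believed to be false.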