Concerning #3: yeah, I'm currently thinking that you need to make some more assumptions. But I'm not sure I want to make assumptions about resources. I think there may be useful assumptions related to the way the hypotheses are learned; i.e., we expect hypotheses with nontrivial weight to agree a lot, because they are candidate generalizations of the same data, which makes it somewhat hard to entirely dissatisfy some while satisfying others. This doesn't seem quite helpful enough on its own, but perhaps something in that direction would work.
In any case, I agree that it seems interesting to explore assumptions about the mutual satisfiability of different value functions.
“resources” is more of a shorthand for “the best utility function looks like a smoothmin of a subset of the different features. Given that assumption, the best fuzzy approximation looks like a smoothmin of all the features, with different weights.”
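For concreteness, here's a minimal sketch of what that could look like, assuming a softmin (negative log-sum-exp) form of smoothmin; the particular feature values, weights, and sharpness parameter `alpha` are hypothetical illustrations, not anything specified above.

```python
import numpy as np

def smoothmin(values, weights=None, alpha=10.0):
    """Weighted smooth minimum via a softmin (negative log-sum-exp).

    As alpha -> infinity this approaches the hard min of `values`;
    the weights shift how strongly each feature pulls the result down.
    """
    values = np.asarray(values, dtype=float)
    if weights is None:
        weights = np.ones_like(values)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so equal inputs return themselves
    # log-sum-exp of -alpha * values, shifted by its max for numerical stability
    z = -alpha * values
    m = z.max()
    return -(m + np.log(np.sum(weights * np.exp(z - m)))) / alpha

# Hypothetical example: the "best" utility as a smoothmin over a subset of
# features, vs. a fuzzy approximation as a weighted smoothmin over all of them.
features = np.array([0.9, 0.4, 0.7, 0.2])
best_utility = smoothmin(features[[0, 2]])                # subset of the features
fuzzy_approx = smoothmin(features, weights=[4, 1, 4, 1])  # all features, weighted
print(best_utility, fuzzy_approx)
```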