Nice post. I suspect you’ll still have to keep emphasizing that fuzziness can’t play the role of uncertainty in a human-modeling scheme (like CIRL), and is instead a way of resolving human behavior into a utility function framework. Assuming I read you correctly.
I think the framework of fuzziness makes some unspoken commitments about how to extrapolate irrational human behavior. If you represent fuzziness as a weighting over utility functions that gets aggregated linearly (i.e. into another utility function), that's useful for the AI making decisions, but it can't be the same thing you use to model human behavior, because humans are going to take actions that shouldn't be modeled as utility maximization.
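To make that concrete, here's a minimal sketch (the two utility functions and the weights are my own illustration, not anything from the post) of why linear aggregation collapses fuzziness back into a single utility function:

```python
# Toy illustration: a linear mixture of utility functions is itself a
# utility function, so an agent that aggregates this way is just an
# ordinary expected-utility maximizer with a composite objective.

def u_total(outcome):
    # Cares about total welfare across people.
    return sum(outcome["welfares"])

def u_average(outcome):
    # Cares about average welfare.
    return sum(outcome["welfares"]) / len(outcome["welfares"])

def fuzzy_utility(outcome, weights=(0.6, 0.4)):
    # Linear aggregation: the result is again a single utility function,
    # fine for the AI's decision-making, but it can't double as a model
    # of a human whose behavior isn't utility maximization.
    w_total, w_average = weights
    return w_total * u_total(outcome) + w_average * u_average(outcome)

print(fuzzy_utility({"welfares": [3, 5, 10]}))  # -> 13.2
```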
To bridge this gap from human behavior to utility function, I interpret you as implying that we should represent human behavior as a patchwork of utility functions. In the post you talk about frequencies in a simulation, where small perturbations might lead a human to care about the total or about the average. Rather than the AI building a context-dependent model of the human, we've somehow taught it (this part might be non-obvious) that these small perturbations don't matter, and should be “fuzzed over” to yield a utility function that's a weighted combination of the ones the human exhibits.
But we could also imagine unrolling this as a frequency over time, where an irrational human sometimes takes the action that’s best for the total and other times takes the action that’s best for the average. Should a fuzzy-values AI represent this as the human acting according to different utility functions at different times, and then fuzzing over those utility functions to decide what is best?
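For concreteness, here's a toy sketch of what that unrolling might look like (the observation log and the frequency-to-weight rule are my assumptions, not something from the post):

```python
# Toy sketch: infer fuzzy weights from how often an (irrational) human
# acts on each utility function over time, then aggregate linearly.

from collections import Counter

# Hypothetical observation log: which utility function best explains
# each observed action.
observed = ["total", "total", "average", "total", "average"]

counts = Counter(observed)
n = len(observed)
weights = {name: c / n for name, c in counts.items()}
print(weights)  # {'total': 0.6, 'average': 0.4}

# The fuzzy-values AI would then maximize
#   sum(weights[name] * U_name(outcome) for name in weights)
# which treats behavioral frequency as normative weight -- exactly the
# commitment I'm asking about.
```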
I’m not basing this on behaviour (that doesn’t work; see https://arxiv.org/abs/1712.05812), but on partial models.