I think it’s reasonable for a higher-power individual to create the best model they can of the lower-power individual, and to update that model diligently upon gaining any new information suggesting it had predicted the subject imperfectly.
I think that’s reasonable too, but for moral/legal discussions, “reasonable” is a difficult standard to apply. The majority of humans are unreasonable on at least some dimensions, and a lot of humans are incapable of modeling others particularly well. And there are a lot of humans who are VERY hard to model, because they really aren’t motivated the way we expect they “should” be, and “what they want” is highly indeterminate. Young children very often fall into this category.
What’s the minimum amount of fidelity a model should have before abandonment is preferred? I don’t know.