I understand what you are getting at, but am not convinced.
I think a more charitable reading would be something along the lines of:
Given that human minds ascribe complex motives and personalities to animals (and fire, and objects, and weather) so often that we have a name for it (“anthropomorphism”), why do we expect that a similar thing isn’t happening here?
Similar in what way? Presented with your “more charitable” reading, I would still think the writer was suggesting that anthropomorphism is the problem in this instance.
Also, it might be relevant to my reading that I often caution against anthropomorphizing humans.
There are perhaps a few things going on here.
There rings a certain absurdity to the phrase “anthropomorphizing humans”: of course it’s not a problem, they’re already anthropomorphic.
My understanding, at this point, is that you are well aware of this, and are enjoying it, but do not consider it an actual argument in the context of the broader discussion. That is, you are remarking on the absurdity of the phrase, not the absurdity of the notion. Is that correct?
I suppose I worry that people will see the absurdity, but misattribute it. When the question is whether a model of a complex thinking, feeling, goal-oriented agent is appropriate to some entities we label human in other respects, and someone says “I have interacted with such entities, and the complex model seems to fit”, it is not at all absurd to point out that we’re overeager to apply the model in cases where it clearly doesn’t actually fit.
I think the analogy only holds if “anthropomorphizing” is the problem in both cases.
Correct.