There rings a certain absurdity to the phrase “anthropomorphizing humans”: of course it’s not a problem, they’re already anthropomorphic.
My understanding, at this point, is that you are well aware of this, and are enjoying it, but do not consider it an actual argument in the context of the broader discussion. That is, you are remarking on the absurdity of the phrase, not the absurdity of the notion. Is that correct?
I suppose I worry that people will see the absurdity but misattribute it. When the question is whether a model of a complex thinking, feeling, goal-oriented agent is appropriate for some entities we label human in other respects, and someone says “I have interacted with such entities, and the complex model seems to fit”, it is not at all absurd to point out that we are overeager to apply the model in cases where it clearly doesn’t fit.
There are perhaps a few things going on here.
Correct: I am remarking on the absurdity of the phrase, not of the notion.