Because it correlates with intelligence and seems indicative of deeper trends in animal neurology. Probably not a signpost that carries over to arbitrary robots, though.
The problem with that is that, for any being that can’t clearly and unambiguously report its experience of mirror self-recognition to us (nonhuman animals generally; there are claims of language use, but those would be considered controversial, to put it mildly, if used as evidence here), we have to guess whether or not the animal recognized itself based on its behaviors and their apparent relevance to the matter. It’s necessarily an act of interpretation. Humans frequently mistake other humans for simpler, less reflective beings than is actually the case because of differences of habit, perception, and behavior, simply because they don’t react in the expected ways to the same stimulus.
Human children have been subjected to the mirror test and passed or failed based on whether they tried to remove a sticker from their own faces. It should not be difficult to list alternative reasons why a child wouldn’t remove a sticker from their face. I find myself wondering if these researchers remember ever having been children before...
Sure, there’s some ambiguity there, but over adequately large sample sizes, trends become evident. Peer reviewed research is usually pretty good at correcting for confounds that people reading about it think up in the first fifteen minutes.
Sure, there’s some ambiguity there, but over adequately large sample sizes, trends become evident.
That is a general defense of the concept of statistical analysis. It doesn’t have anything to do with my point.
Peer reviewed research is usually pretty good at correcting for confounds that people reading about it think up in the first fifteen minutes.
It’s pretty damn slow about correcting for pervasive biases in the researcher population, though. There’s a reason we talk about science advancing funeral-by-funeral.
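For what it’s worth, the statistical point being argued over here is easy to sketch in a few lines of Python. This is a toy model under assumptions I’m inventing purely for illustration (the true pass rate, and the split between random scoring noise and systematic scoring bias, are made up, not data from any real mirror-test study): larger samples do average away random misreadings, but they do nothing about a misreading that leans consistently one way.

```python
import random

random.seed(0)

# Toy model of the exchange above. Each simulated child "really" passes the
# mirror test with probability TRUE_PASS_RATE; the observer's scoring then
# adds two kinds of error:
#   - NOISE: a symmetric, random misreading (flips the score either way)
#   - BIAS:  a one-directional misreading (genuine passes scored as failures)
# Every number here is a made-up illustration, not data from any real study.

TRUE_PASS_RATE = 0.6
NOISE = 0.2
BIAS = 0.2

def observed_rate(n, noise=0.0, bias=0.0):
    """Fraction of n simulated trials that get scored as a pass."""
    passes = 0
    for _ in range(n):
        scored = random.random() < TRUE_PASS_RATE
        if random.random() < noise:            # random misreading: either direction
            scored = not scored
        if scored and random.random() < bias:  # biased misreading: one direction only
            scored = False
        passes += scored
    return passes / n

for n in (30, 300, 30000):
    print(f"n={n:>6}  noise only: {observed_rate(n, noise=NOISE):.3f}"
          f"   noise + bias: {observed_rate(n, noise=NOISE, bias=BIAS):.3f}")

# With random noise alone the estimate settles down as n grows (toward
# 0.6*0.8 + 0.4*0.2 = 0.56 in this toy setup), which is the "large samples
# reveal the trend" point. Adding a systematic scoring bias doesn't make the
# estimate noisier -- it moves the value the large sample converges to, and
# no amount of extra data corrects for that.
```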