From reading the discussion, I get the impression that some of the commenters are writing from a position of “the prior is against significant dog intelligence, and the evidence here could have alternative explanations, so I’m skeptical”. That is, these people feel it would be quite surprising if dogs really were this intelligent, so establishing it requires some pretty compelling evidence.
At the same time, my own feeling is more like “there’s no strong prior against significant dog intelligence, and the evidence here could have alternative explanations, so I’m tentatively open to this being real”. As in, even if you hadn’t shown me these videos, dogs being approximately as intelligent as described would have seemed like an entirely plausible possibility to me.
If there are any people who feel like my first paragraph does describe them, I’d be curious to hear why they’re coming into this discussion with such a strong prior against dog intelligence.
If I had to articulate the reasons for my own “seems plausible” prior, they’d be something like:
a general, vague sense that animal research tends to show that animals are smarter than people often think
some of the animals in books like “Don’t Shoot the Dog” sounding relatively smart (e.g. I vaguely recall the author mentioning that training often speeds up once the animal figures out that it’s being trained, since then it can explicitly try to figure out what the trainer is trying to reward it for), as in this passage from the book:
I once videotaped a beautiful Arabian mare who was being clicker-trained to prick her ears on command, so as to look alert in the show ring. She clearly knew that a click meant a handful of grain. She clearly knew her actions made her trainer click. And she knew it had something to do with her ears. But what? Holding her head erect, she rotated her ears individually: one forward, one back; then the reverse; then she flopped both ears to the sides like a rabbit, something I didn’t know a horse could do on purpose. Finally, both ears went forward at once. Click! Aha! She had it straight from then on. It was charming, but it was also sad: We don’t usually ask horses to think or to be inventive, and they seem to like to do it.
even relatively simple AI systems exhibiting surprisingly intelligent behavior (GPT-3), suggesting that there isn’t necessarily a sharp distinction between human and less-than-human intelligence
I think you hit the nail on the head here. When I was writing the article I definitely had someone with a high prior on dog intelligence in mind, to the point where I expected people to say “so what, why wouldn’t dogs do that if you trained them”.
Sometimes people seem to put dogs closer to reflexive automatons like insects than to fellow mammals. My prior is that the basic affects we feel aren’t fundamentally different between us and dogs (and most other higher mammals). I’m talking about stuff like fear, excitement, generalized negative or positive affect, tiredness, sexual arousal. Even something like craving a specific food: I don’t see why it should be unique to us, given that dogs are often picky eaters and have favorite foods.
People with strong priors against dog intelligence seem to ascribe everything to anthropomorphism, and there’s often an undertone of “these people are too soft and weak, they call themselves ridiculous things like ‘dog parents’, they’d project human baby characteristics onto a Furby if you gave them the chance”. FWIW I don’t have a dog and don’t plan to get one, and in my experience most dogs are fairly dumb. But to me they’re clearly a bit more than simple automatons blindly reacting to stimuli.
My priors include both the idea that animal intelligence is not that different from human intelligence and the idea that humans tend to over-anthropomorphize animal cognition. The biggest misunderstandings of animal cognition are much like the misunderstandings humans have of foreign cultures, often involving forms of the mind projection fallacy, where we assume others’ values, motivations, priorities, and perceptions are more similar to (or more different from) our own than is justified.
I’m a bit confused by people in the comments entertaining the idea that priors should influence how we interpret the magnitude of the evidence, even though when I look at Bayes’ rule it seems to say that the magnitude of the update (how much you have to multiply the odds by) is independent of what your prior was. I know it’s not that simple, because sometimes the evidence itself is noisy and needs interpretive “pre-processing” before plugging it into the equation, but this “pre-processing” step should use a different prior than the one we’re trying to update. I’m not sure how exactly this “multi-stage Bayesian update rule” should work; I was trying to describe my struggle in my post: https://www.lesswrong.com/posts/JtEBjbEZidruMBKc3/are-dogs-bad
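To make the point about update magnitudes concrete, here is the odds form of Bayes’ rule (a minimal sketch of what I have in mind, with H standing for “dogs are roughly this intelligent” and E for the evidence in the post):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}$$

The likelihood ratio $P(E \mid H)/P(E \mid \neg H)$, i.e. the factor the prior odds get multiplied by, contains no prior term, which is what I mean by the magnitude of the update being independent of where you started. The “pre-processing” worry is that E itself is ambiguous, so each likelihood has to be computed by averaging over auxiliary hypotheses A (say, about how the videos were produced and edited), roughly $P(E \mid H) = \sum_A P(E \mid H, A)\,P(A \mid H)$, and beliefs about A are where a second, different prior can sneak in.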