I also mentioned Clever Hans, and you made a good point in response. Rather than sound like I’m motte-and-baileying you, I’ll admit I was using “Clever Hans” with irresponsible imprecision, as a stand-in for more issues than were actually present in the Clever Hans case.
I’ve updated in the direction of “I’ll eventually need to reconsider my relationship with my dog” but still expect a lot of these research threads to come apart through some combination of:
Subconscious cues from trainers—true Clever Hans effects (dogs are super clued in to us thanks to selection pressure, in ways we don’t naturally detect)
Experiment design that has obvious holes in it (at first)
Experiment design that has subtle holes in it (once the easy problems are dealt with)
Alternative explanations, from professional scientists, of experimentally established hole-free results (once the field grows large enough to attract widespread academic attention). Like, yes, you unambiguously showed experimental result x, which you attributed to p, and p would indeed explain x, but q is an equally plausible explanation that your experiment does not distinguish from p.
This is based on a model of lay science as tending to show these patterns: lay science is usually a “labor of love,” which makes it harder to detect one’s own biases.
Specifically on the volunteer-based projects, I expect additional issues with:
Selection effects in the experimentees (only unusually smart/perceptive/responsive/whatever dogs will make it past the first round of training; the others will have owners who get bored from lack of results and quit)
Selection effects in the experimenters (only certain types of people will even be aware of this research, and only exceptionally talented dog trainers will stick with the program, because training even intelligent dogs takes so much f-ing patience, to say nothing of dumber ones)
There may be lines of research that conclusively establish some surprising things about dog intelligence, and I look forward to any such surprisal. But I’m going to wait until the dust settles more—and until there are more published papers because I have to work a lot harder to understand technical information conveyed by video—before engaging with the research.
While I haven’t done a rigorous study of the effect, my gut feeling is that the vast majority of suggestive and interesting phenomena eventually fall apart for exactly these reasons.
This is why I do not give much thought to this sort of stuff.
Ideally, I’d do a rigorous study of initially-interesting-but-later-fell-apart studies, or find someone else who has, and then maybe I’d be better able to allocate my cognitive resources...