This is an interesting response; mine is of the opposite valence. To me, this doesn’t feel too dissimilar from something my cousin-who-is-into-pyramid-schemes would send me. I believe that this post has:
- Large claims that are not supported by evidence
- Mirages of evidence that do not meaningfully constitute such
- Cursory dismissal of potential concerns
Claims that set off alarm bells to me in this post include:
> Your Dog is Even Smarter Than You Think
> Epistemic status: highly suggestive.
> There’s a revolution going on and you’re sleeping on it.
> her dog started to display capabilities for rudimentary syntax
> Once your dog gets the hang of it, you’re able to add more buttons faster, but it’s never quick. Dogs take a while to come up with a response (they’re bright, but they’re not humans), and you can’t force your dog to learn, so you have to work together and find motivation (for the dog and for yourself!). And not every pet has a strong desire to communicate.
> Bunny is creative with the limited button vocabulary available to her and tries to use words in novel ways to communicate: “stranger paw” for splinter in her paw, “sound settle” for shut up, “poop play” for fart, “paw” to refer to owner’s hand. … Bunny knows each of her doggy friends by name, thinks about them when they’re not there, asks where they are, requests to play with them. … Bunny understands times of day like today, morning, afternoon, night. … And can recall what time of day she went to the park. … Bunny is quite obsessed over her bowel movements (how Freudian) and about her owners’ poop cycle. … Bunny communicates emotional states like mad, happy, concerned. And “ugh”. … Bunny wants to know what and why is a “dog”. … And whether Mom used to be a dog. And she can recognize herself in the mirror.
To me, these claims were not supported by more than what I think is cursory evidence:
1. The first three bullets are not explicitly supported, but are presumably supported by the rest of the article. Besides the support I quote and address after this, key supporting evidence seems to be:
- Under Stella: Explanation of a language learning system for autistic youth, the qualifications of the woman who precipitated this exploration, video of a dog seemingly pressing the buttons “bed”, “all done”, “come”, “outside”, and video of the dog seemingly pressing the buttons “help”, “good”, “want”, “eat”.
-- I believe the videos are meant to be what constitutes evidence in this section.
There are some aspects of the videos that I think lend them some credibility: The dog uses the same paw to hit each button (seems more deliberate), approaches the pad slowly (seems more deliberate), may be looking at each button prior to pressing (ambiguous, but possibly lends credibility), and to my untrained eye it seems as if the video was indeed taken in one shot.
I also think there are aspects of the videos that aren’t compelling: If I were to create a board of what appears to be 40 general words, I’d imagine that I could assign meanings to many, perhaps most, random combinations. The meanings and word combinations portrayed here seem at least somewhat unlikely to be of high utility or to reflect continuous thought. Why would a dog want to tell its owner “bed” “all done”, and why would an owner want to know that? Dogs tend to wake up and fall asleep quickly and frequently throughout the day; they don’t tend to have a nap time and difficulty waking up the way a toddler does. There’s also no need for “come” to be paired with “outside”; “outside” is enough to request a walk. I tend to believe that simplicity would dominate here, and the complexity and interpretation necessary for these strains my credulity.
The second word pairing, “help” “good” “want” “eat”, really doesn’t have an obvious meaning from my perspective. “Help” “want” “eat” would be clearer, for example, or “help” “eat”. This, to me, feels more likely to be a reading into a random combination (at least when considering the first two words separately from the second two) than one coherent thought or expression. When trying to present evidence of a ~talking dog, I would expect there to be many, many videos of more plausible expression; the choice of these two as the leading evidence feels particularly questionable. I don’t have any reason to think that these videos are doctored, but FWIW, it would seemingly be easy to replace the audio (or control it remotely) to say whatever is desired.
I planned to go through this post point by point, but am finding myself wanting to move on for time efficiency reasons (I’ve also switched to an anonymous username given time constraints and associated limits to my presentation of this argument). I’ll quickly cover the remainder of the post:
- The Bunny section presents two types of evidence: videos and links to ongoing academic studies (without results). The videos are particularly uncompelling: they are much less compelling than Stella’s. These videos show large gaps in time between button presses, use of different paws, not looking at the buttons, word combinations that do not obviously have meaning, seeming disinterest from the dog, and many instances of multiple shots such that you’re trusting that they comprise one timeline rather than several. I really struggle to find anything compelling about these. The links to ongoing studies particularly pattern-match, to me, to the tactics of those who try to feign credibility; an ongoing study without results is not an indicator of a positive result.
The Koko section does not present any evidence, and the honorable mentions section presents more videos, which I didn’t review for time efficiency reasons.
---
I also find that there was only cursory dismissal of potential concerns, which is why I was quite surprised to see your opposite take:
> quite appreciated the epistemic status woven throughout the post (i.e. concerns about Clever Hans, the steps attempted at addressing it, and the current status of how the jury is still out on some studies)
The only mention of Clever Hans merely says that the researcher is “well aware” of it. I don’t see any discussion of attempts to address it. Regarding studies, the jury appears to still be out on ALL studies. I think this is partially recognized but also understated in the article; the only relevant quote is:
> She has partnered with researchers from University of California, San Diego to have several cameras looking at the button pad running 24/7, for them to do more rigorous analysis. Presumably there’s an actual paper on the way.
On the whole, this post being curated (and the number of upvotes it received) has been one significant contributor to my feeling that the state of critical readership and response on LW is much worse than my prior before I began engaging more closely of late. This article in particular feels not too dissimilar from something I could imagine on e.g. Buzzfeed; it just says some big things with very little substantive evidence and some maneuvers that seem most commonly used to weakly mask the lack of credibility of the argument. I’d love to hear your response; I think there is some likelihood that I’ll update or at least better understand this forum’s readership.
I’m not sure that we disagree much about how likely Bunny is to be doing complex language. My primary takeaway was not “this dog can talk” but rather “man, we really should be checking more comprehensively whether dogs can talk.” I tried to be pretty clear about that in the curation notice.
I think early stage science looks more like messing around than like rigorous studies. I think you need to do a lot of messing around before you get to a point where you have something to rigorously study. My curation of this is a celebration of checking things and being curious, not an endorsement of the theory.
I think I do agree that the post’s title and opening epistemic status are too strong (and yeah I think it was a mistake not to include that in the curation notice. I’ll edit it to do so).
I don’t think it’s fair to say my dismissal of concerns is “cursory” if you include my comments under the post. Maybe the article itself didn’t go deep enough; partly I wanted it to scan well, and partly I wanted to see good criticism so I could update/come up with good responses, because it’s not easy to preempt every criticism.
As for cursory evidence, yes it’s mostly that, but cursory evidence can still be good Bayesian evidence. I think there’s enough to conclude there’s something interesting going on.
Are the vids even real?
For starters, all of this hinges on videos being done in good faith. If it’s all creative editing of pets’ random walks (heh) over the board, then of course we should dismiss everything out of the gate.
For Stella IIRC all of the interesting stuff is on Instagram @hunger4words, so I only had those two YouTube vids. I agree they’re not the best for leading evidence.
Please watch this video even if you have time constraints (it works fine at 1.5x speed).
- She shows (excessive IMO) humility and defers to those she considers experts.
- Considers herself a “hopeful skeptic”; when Bunny does something unexpectedly smart, she still wonders if it’s just coincidence (at 4:05). Also she namedrops Skinner and Chomsky 😄
- Makes a point to put “Talking” in scare quotes in her video titles.
- Tells a realistic story, where it took “a few weeks” for Bunny to learn just a single “Outside” button, and it takes “a thousand tiny reinforcements” to keep learning. And shows many examples of how she does the training.
- Explains how she teaches abstract concepts like “Love you” and acknowledges it’s not the same concept to the dog, but it has “an affectionate meaning”.
- In many videos we see Bunny take a looong time to respond. The dog goes away from the board (to “think”, presumably) and later comes back with an answer. Those parts are sometimes cut out but usually just sped up. If it’s all fake, why include that?
- In some Bunny videos we see random “tantrums”, which show what a truly random walk sounds like; if she’s selling us a bridge, why include that in the videos?
- “Conversations” are mostly very mundane and doglike, and she doesn’t show any truly amazing feats of intellect. She doesn’t even try to teach the dog to count!
- Claims to have cameras constantly pointed at the board for research, and indeed in many of her clips there are lower-quality parts shot from a constant angle. That is consistent with “something interesting happened but she wasn’t filming at the time”. A big tell of fake/staged videos is that someone just happened to be filming at that exact moment despite nothing seeming to prompt it.
On the balance of evidence, Alexis doesn’t look like someone who’s trying very hard to convince you of her magic talking dog to sell you $250 online dog communication courses. And don’t say “Amazon affiliate links”; even Scott has done that.
But a bigger part of why I updated towards “there’s something there” is that there are several people who recreated this. Of course it’s possible that every one of them is also fake, but that would be a bigger reach. Or it could be that it’s easy to delude yourself and overinterpret pet output, but then the videos are still in good faith and that’s what we’re determining here.
Here’s a video of Billi the cat where she repeatedly and consistently refuses food, which is the opposite of what usually happens: the owner tries to railroad her and she still says no.
As with Bunny, you can see that the cat takes forever to respond.
If it was just clever training to always respond yes to food with no understanding, why did this happen?
Ok if vids are real, it’s still all Clever Hans
I’ll just link to a few comments of mine on that:
- Simple button use is expected by induction, danger of over-interpreting
- It can’t be classic Clever Hans if owner doesn’t know the right answer
It smells like Buzzfeed and I’m disappointed in LW
It kind of does, but that wasn’t the model. What I had in the back of my mind is “if Eliezer gets to do it, then I get to do it too”. I think the community simply likes boldly stated (and especially contrarian) claims, as long as it doesn’t go too far off-balance.
I didn’t consciously go for any “maneuvers” to misrepresent things. IMO the only actually iffy part is the revolution line (steelman: even if your pet can tell you what they actually want to do instead of your having to guess, that’s a revolution in communication).
And I think I hedged my claims pretty well. This stuff is highly suggestive; my position is “hey, despite the trappings of looking like fake viral videos, you should look at this because it’s more interesting than it looks at first glance”. I expect that we’ll learn something interesting, but I don’t have any certainty about how much. Maybe after rigorous analysis we’ll see that dogs do only rudimentary communication and the rest is confirmation bias. Maybe we’ll learn something more surprising.
Normie bias
> To me, this doesn’t feel too dissimilar from something my cousin-who-is-into-pyramid-schemes would send me.
> This article in particular feels not too dissimilar from something I could imagine on e.g. Buzzfeed; it just says some big things with very little substantive evidence and some maneuvers that seem most commonly used to weakly mask the lack of credibility of the argument.
I expected more complaints of this kind, so I was pleasantly surprised. I can easily imagine structurally similar arguments from someone who thinks AI alignment or cryonics are weird “nerd woo”. If we’re to be good rationalists, we have to recognize that most evidence isn’t neatly packaged for us in papers (or gwern articles) with hard numbers and rigorous analysis. We can’t just exclude the messy parts of the world and expect to arrive at a useful worldview. Sometimes interesting things happen on Instagram and TikTok.
Minor complaints
To be honest, a few of the reasons you give for deciding the evidence is “not compelling” are pretty weird. Why does it matter if the dog uses one paw or both paws? Why is it weird that a dog has a “bed time”? What is “seeming disinterest” from the dog, and what makes you think you can see that? Why do you expect dogs to strive for brevity and to be “more clear”?
I appreciate your response, and my apologies that for time-efficiency reasons I’m only going to respond briefly and to some parts of it.
> I don’t think it’s fair to say my dismissal of concerns is “cursory” if you include my comments under the post. Maybe the article itself didn’t go deep enough; partly I wanted it to scan well, and partly I wanted to see good criticism so I could update/come up with good responses, because it’s not easy to preempt every criticism.
I’m somewhat sympathetic to this. I do feel as though, given large claims like “revolutionary” and the definite rather than hedged phrasing of the title, it was worth doing more than a cursory treatment in the article itself. I haven’t read your comments nor looked at their timing, but I imagine some to most readers read the article without seeing these comments. I’m saddened that those readers likely had much too strong a takeaway and upvoted this post.
> As for cursory evidence, yes it’s mostly that, but cursory evidence can still be good Bayesian evidence. I think there’s enough to conclude there’s something interesting going on.
> This stuff is highly suggestive
I agree with the first and not with the second. I think this is lightly suggestive, and I strongly suspect that LWers who accept this level of evidence as highly suggestive will have some pretty inaccurate models of the world. For example, I do think most of the mommy-blogger, pyramid-scheme, etc. content we see all over social media presents similar, if not typically higher, levels of evidence.
> What I had in the back of my mind is “if Eliezer gets to do it, then I get to do it too”.
I’m somewhat new to this community, so FWIW, while I certainly know who Eliezer is and have read some of his stuff, I don’t understand this reference.
> I think the community simply likes boldly stated (and especially contrarian) claims, as long as it doesn’t go too far off-balance.
I find this quite disappointing, and would have expected the LW community to be better.
> I can easily imagine structurally similar arguments from someone who thinks AI alignment or cryonics are weird “nerd woo”. If we’re to be good rationalists, we have to recognize that most evidence isn’t neatly packaged for us in papers (or gwern articles) with hard numbers and rigorous analysis. We can’t just exclude the messy parts of the world and expect to arrive at a useful worldview. Sometimes interesting things happen on Instagram and TikTok.
I don’t necessarily disagree with this, but I do think the arguments for AI alignment and cryonics have been much more thoughtfully presented, with approximately appropriate calibration.
> (steelman: even if your pet can tell you what they actually want to do instead of your having to guess, that’s a revolution in communication).
For dogs at least, there’s a threshold this would have to reach, for me, before it starts to become true (same with the title; the behaviors shown don’t necessarily lead me to update my priors). I’ve had three dogs, each of which had clear indicators for wanting to go out (e.g. pawing at the outside door, showing excitement when I asked) and for wanting food.
> I didn’t consciously go for any “maneuvers” to misrepresent things.
FWIW I absolutely believe this, and the rest of your points e.g. about the videos are well-taken. Thank you for your thoughtful response.
EDIT:
> Please watch this video even if you have time constraints (it works fine at 1.5x speed).
I’m not sure I understand why this was recommended; it didn’t seem notable to me and is more of a let’s-feel-good-about-this video than anything.
I am super-duper surprised she says it took a few weeks to teach the Outside button! It took about… 15 minutes to teach my dog to use her Food bell. And then the Outside bell and Treat bells were similarly fast. I don’t think button pressing is inherently harder than bell ringing, so that shouldn’t make a difference.
I guess if the dog was starting at zero training it would take two weeks. (Robin already knew how to Target an item, which she learned after learning hand Touch, which she learned as part of the process of teaching how clicker-like training with positive reinforcers works in the first place.)
I can imagine abstract words like “Tomorrow” and “Where” taking a whole lot longer, but the words that are just ways to obtain concrete things are extremely easy to teach. Outside bells are a very well-known and frequently-done thing. Look them up on Amazon and you’ll see about 20 options for sale.