I appreciate your response, and my apologies that, for time-efficiency reasons, I’m only going to respond briefly and to some parts of it.
> I don’t think it’s fair to say my dismissal of concerns is “cursory” if you include my comments under the post. Maybe the article itself didn’t go deep enough, partly I wanted it to scan well, partly I wanted to see good criticism so I could update/come up with good responses, because it’s not easy to preempt every criticism.
I’m somewhat sympathetic to this. I do feel as though, given large claims (e.g. “revolutionary”) and the definite rather than hedged phrasing in the title, it was worth doing more than a cursory treatment in the article itself. I haven’t read your comments nor looked at their timing, but I imagine some to most readers read the article without seeing them. I’m saddened that those readers likely came away with much too strong a takeaway and upvoted this post.
> As for cursory evidence, yes it’s mostly that, but cursory evidence can still be good Bayesian evidence. I think there’s enough to conclude there’s something interesting going on.
> This stuff is highly suggestive,
I agree with the first and not with the second. I think this is lightly suggestive, and I strongly suspect LWers who accept this level of evidence as highly suggestive will have some pretty inaccurate models of the world. For example, I think most mommy-blogger, pyramid-scheme, etc. content we see all over social media presents similar, if not higher, levels of evidence.
> What I had in the back of my mind is “if Eliezer gets to do it, then I get to do it too”.
I’m somewhat new to this community, so FWIW, while I certainly know who Eliezer is and have read some of his stuff, I don’t understand this reference.
> I think the community simply likes boldly stated (and especially contrarian) claims, as long as it doesn’t go too far off-balance.
I find this quite disappointing, and would have expected the LW community to be better.
> I can easily imagine structurally similar arguments from someone who thinks AI alignment or cryonics are weird “nerd woo”. If we’re to be good rationalists we have to recognize that most evidence isn’t neatly packaged for us in papers (or gwern articles) with hard numbers and rigorous analysis. We can’t just exclude the messy parts of the world and expect to arrive at a useful worldview. Sometimes interesting things happen on Instagram and TikTok.
I don’t necessarily disagree with this, but I do think the arguments for AI alignment and cryonics have been much more thoughtfully presented, with approximately appropriate calibration.
> (steelman: even if your pet can tell you what they actually want to do instead of your having to guess, that’s a revolution in communication).
For dogs at least, there’s a threshold this would have to reach before it starts to feel true to me (same with the title; the behaviors shown don’t necessarily lead me to update my priors). I’ve had three dogs, each of which had clear indicators for wanting to go out (e.g. pawing at the outside door, showing excitement when I asked) and for wanting food.
> I didn’t consciously go for any “maneuvers” to misrepresent things.
FWIW I absolutely believe this, and the rest of your points, e.g. about the videos, are well taken. Thank you for your thoughtful response.
EDIT:
> Please watch this video even if you have time constraints (it works fine at 1.5x speed).
I’m not sure I understand why this was recommended; it didn’t seem notable to me, and is more of a let’s-feel-good-about-this video than anything.