I downvoted this post because it is basically meta-discussion built on arguments from authority and tribalism: Andrew Ng and MIRI == good; it turns out Jeff Hawkins influenced Ng and shares some conceptual ideas with MIRI; therefore Hawkins == good. That’s faulty reasoning, and it can reinforce wrong beliefs.
Tell me, what about Hawkins/Numenta’s work makes it right or wrong on its own merits? Why is it, or isn’t it, likely to lead to capable general-purpose intelligences?
I didn’t see the post in that light at all. I think it gave a short, interesting, and relevant example of the dynamics of intellectual innovation in “intelligence research” (Jeff) and how this could help predict and explain the impact of current research (MIRI/FHI). I do agree the post is about “tribalism” rather than about the truth; however, that seems to have been the OP’s explicit intention, and it is a worthwhile topic. It would be naive and unwise to overlook these sorts of societal considerations if your goal is to make AI development safer.
As far as I can tell, you’ve misunderstood what I was trying to do with this post. I’m not claiming that Hawkins’ work is worth pursuing further; passive_fist’s analysis seems pretty plausible to me. I was just trying to give people some information they may not have about how certain ideas developed, to help them build a better model of such things.
(I did not downvote you. If you thought that I was arguing for further work toward Hawkins’ program, then your comment would be justified, and in any case this is a worthwhile thing for me to explicitly disclaim.)