I think we have much more disagreement about psychology than about AI, though I admit to low certainty about the psychology too.
About AI, my point was that in understanding the problem, the training loop takes roughly the role of evolution, and the model takes that of the evolved agent, with implications for comparing their success, and possibly for identifying what's missing. I did refer to the fact that, algorithmically, we took ideas from the human brain for the training loop, and it therefore makes sense for it to be algorithmically more analogous to the brain. Given that clarification, do you still mostly disagree? (If not, how do you recommend changing the post to make it clearer?)
Adding "short-term memory" to the picture is interesting, but is there then any mechanism for it to become long-term?
About the psychology: I do find the genetic bottleneck argument intuitively convincing, but I think we have reasons to distrust this intuition. There is often a huge disparity between data in its most condensed form and data in a form that is convenient to use in deployment. Think about the difference in length between code written in a functional/declarative language and its assembly code. I have literally no intuition as to what can be done with 10 megabytes of condensed Python, but I guess that it is more than enough to automate a human, if you know what code to write. While there probably is a lot of redundancy in the genome, it seems much less likely that there is comparable redundancy in the synapses, as their use is not just to store information, but mostly to implement the needed information manipulations.
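To make the condensed-versus-deployed disparity concrete, here is a toy illustration (my own sketch, not part of the argument above): Python's standard `dis` module shows how a single declarative line expands into many low-level instructions, and the eventual machine code is longer still.

```python
# A minimal sketch of the condensed-vs-deployed disparity:
# one short declarative line expands into many low-level instructions.
import dis

def squares(n):
    # One compact, declarative line of source...
    return [x * x for x in range(n)]

# ...disassembles into dozens of bytecode instructions, before any
# further expansion into actual machine code by an interpreter or JIT.
dis.dis(squares)
```

The exact expansion ratio varies, but the point is only that a small amount of "condensed" code routinely unfolds into a much larger deployed form.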