Does anyone know of a good technical overview of why it seems hard to get Whole Brain Emulations before we get neuromorphic AGI?
I think maybe I read a PDF that made this case years ago, but I don’t know where.
I haven’t seen such a document, but I’d be interested to read it too. I made an argument to that effect here: https://www.lesswrong.com/posts/PTkd8nazvH9HQpwP8/building-brain-inspired-agi-is-infinitely-easier-than
(Well, a related argument anyway. WBE is about scanning and simulating the brain rather than understanding it, but I would make a similar argument using “hard-to-scan” and/or “hard-to-simulate” things the brain does, rather than the “hard-to-understand” things the brain does, which is what I was nominally blogging about. There’s a lot of overlap between those anyway; the examples I put in mostly work for both.)
Great. This post is exactly the sort of thing that I was thinking about.