I’m curious about this study’s implications with regard to whole brain emulation as well. Based on my extremely limited understanding of mind-uploading, my initial thought is that scanning and uploading a specific individual’s brain would not necessarily rely on a previously known model of human anatomy. The brain could look like noodles in a bowl of spaghetti, and the appropriate mind-uploading technology should still be able to simulate it accurately. Different people’s brains look different, so it seems like the technology would need to be extremely sensitive in order to detect the exact, unique brain state of an individual (the organization of fibers as well as molecular states). My second thought is that perhaps such technology would instead rely on a high-resolution model of the wiring in the average human brain, and would scan an individual’s brain while using that general model as a “guide,” I suppose.
I guess what I’m really saying is, I don’t really have anything of substance to contribute… but would also love someone with more expertise to shed some light on this!
My memory from Ken Hayworth’s talk was that the problems were: 1) plastinating brains in the first place without losing information, 2) being able to slice them thinly enough that all the information is scannable, again without the slicing destroying anything, and 3) having enough cheap electron microscopes that it wouldn’t take thousands of years to scan a single brain. It seems to me there’s a fourth problem, doing something useful with the scanned information, but I don’t remember much talk about that one.
So not really helped by how connections are shaped, as far as I can understand.
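For a sense of why problem 3) bites, here’s a rough back-of-envelope calculation. All the numbers are my own assumptions (approximate brain volume, typical electron-microscopy voxel sizes, an optimistic per-microscope throughput), not figures from Hayworth’s talk:

```python
# Back-of-envelope: how long would it take to image a whole human brain
# at electron-microscope resolution? All parameters are rough assumptions.

brain_volume_mm3 = 1.4e6          # ~1.4 liters of brain tissue
voxel_nm3 = 4 * 4 * 30            # ~4 nm lateral resolution, ~30 nm slice thickness
voxel_mm3 = voxel_nm3 * 1e-18     # 1 nm^3 = 1e-18 mm^3

total_voxels = brain_volume_mm3 / voxel_mm3  # on the order of 10^21

voxels_per_sec = 1e7              # optimistic throughput for one microscope
seconds_per_year = 3.15e7

def scan_years(n_microscopes):
    """Years to image the whole volume with n microscopes running in parallel."""
    return total_voxels / (voxels_per_sec * n_microscopes) / seconds_per_year

print(f"total voxels: {total_voxels:.2e}")
print(f"1 microscope:     {scan_years(1):,.0f} years")
print(f"1000 microscopes: {scan_years(1000):,.0f} years")
```

Even with a thousand microscopes working in parallel, these (admittedly crude) assumptions still give a scan time measured in millennia, which is why microscope cost and parallelism show up as a core bottleneck.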