I agree with a lot of what you said, but I’m generally skeptical of any emulation that doesn’t work from a bottom-up simulation of neurons. We really don’t know what causes consciousness or how, and I don’t think it can be ruled out that something with the same inputs and outputs at a high level misses whatever it is that generates consciousness. I don’t necessarily believe in p-zombies, but if they are possible, it seems they would be built exactly this way: by creating something that copies the high-level behavior but not the low-level functions.

On a practical level, I also don’t know how you could verify that a high-level recreation is accurate. If my brain were scanned into a supercomputer that could run molecular dynamics on every molecule in it, I think there is very little doubt that the result would be an accurate reflection of my brain. It might not be me in the sense of continuity of consciousness, but it would be me in the sense that it would behave like me in every circumstance. A high-level copy, by contrast, could miss some subtle behavior that isn’t accounted for in the abstraction used to build the model. If an LLM were trained on everything I ever said, it could imitate me for a good long while, but it wouldn’t think the way I do. A more complex model would do better, but there’s no guarantee it would be perfect. How could we ever be sure our understanding didn’t miss something subtle that emerges from the fundamentals?

Maybe I’m misinterpreting the post, but I don’t see a huge benefit from reverse engineering the brain for the purpose of simulating it. Are you suggesting something other than an accurate simulation of each neuron and its connections? If you are, I think that method is liable to miss something important. If you’re not, then understanding each piece, without understanding the whole, is sufficient to emulate a brain.
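To be concrete about what I mean by “simulation of each neuron and its connections”: even the crudest version of that idea is a dynamical model of membrane potentials plus a connectivity matrix, stepped forward in time. The sketch below is a toy leaky integrate-and-fire network, far coarser than molecular dynamics and not anyone’s actual proposal; every parameter and the connectivity are placeholder assumptions, just to illustrate the bottom-up framing.

```python
# Toy sketch of a "bottom-up" neuron-level simulation: a leaky
# integrate-and-fire network with random sparse connectivity.
# All values here are illustrative placeholders, not biological constants.
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 100          # hypothetical network size
n_steps = 1000           # number of simulation steps
dt = 1e-3                # timestep (seconds)
tau = 20e-3              # membrane time constant (seconds)
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # membrane potentials (mV)

# weights[i, j] is the effect of a spike from neuron j on neuron i;
# keep roughly 10% of connections, drop the rest.
weights = rng.normal(0.0, 0.5, (n_neurons, n_neurons))
weights *= rng.random((n_neurons, n_neurons)) < 0.1

v = np.full(n_neurons, v_rest)       # current membrane potentials
spikes = np.zeros(n_neurons, bool)   # which neurons fired last step
spike_count = np.zeros(n_neurons)    # total spikes per neuron

for step in range(n_steps):
    # Noisy external drive plus recurrent input from last step's spikes.
    i_input = rng.normal(1.0, 0.5, n_neurons) + weights @ spikes
    # Leak toward rest, nudged upward by the input current.
    v += (dt / tau) * (v_rest - v) + (dt / tau) * 15.0 * i_input
    spikes = v >= v_thresh
    v[spikes] = v_reset              # reset the neurons that fired
    spike_count += spikes

print(f"mean firing rate: {spike_count.mean() / (n_steps * dt):.1f} Hz")
```

The point of the toy isn’t biological realism; it’s that a bottom-up emulation only has to get the local update rule and the wiring right, whereas a high-level imitation has to get every downstream behavior right without ever representing that wiring.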