What’s going to stop me from duplicating just the canine prefrontal cortex and experimenting with it? It’s a nice little classifier / decision maker, I’m sure it has other uses...
What’s going to stop you is that the prefrontal cortex is just one part of a larger whole. It may be possible to isolate that part, but doing so may be very difficult. Now, if your simulation were running in real time, you could spawn off a bunch of different experiments pursuing different ideas for how to isolate and use the prefrontal cortex, and keep doing this until you find something that works. But if your simulation is running at 1/10000th real time, as JamesAndrix suggests in his hypothetical, the prospects for this kind of method seem dim.
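To make the slowdown concrete, here is a back-of-envelope sketch; the 10,000x factor is from JamesAndrix’s hypothetical, while the experiment length and count are made-up illustrations, not estimates:

```python
# Back-of-envelope: wall-clock cost of trial-and-error at 1/10000th real time.
# The slowdown factor is from the hypothetical; the experiment length and
# trial count below are made-up illustrations.

SLOWDOWN = 10_000  # simulation speed: 1/10000th of real time

def wall_clock_days(sim_seconds: float, n_trials: int) -> float:
    """Real days needed to run n_trials experiments of sim_seconds each, serially."""
    return sim_seconds * SLOWDOWN * n_trials / 86_400

print(wall_clock_days(600, 1))    # one 10-minute behavioral test: ~69 days
print(wall_clock_days(600, 100))  # 100 candidate isolation schemes: ~19 years
```

Parallel hardware would let many trials run side by side, but each individual trial still costs months of wall-clock latency, which is what makes this style of iterative search so slow.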
Of course, maybe the existence of the dog brain simulation is sufficient to spur advances in neuroscience to the point where you could isolate the functioning of the prefrontal cortex directly, without the need for millions of experimental runs. Even so, your resulting module is still going to be too slow to be an existential threat.
Just the capacity to reliably emulate major functional regions of a vertebrate brain already puts you on the threshold of creating big, powerful nonhuman AI.
The threshold, yes. But that threshold is still nontrivial to cross. The question is, given that we can reliably emulate major functional regions of the brain, is it easier to cross the threshold to nonhuman AI, or to full emulations of humans? There is virtually no barrier to the second threshold, while the first one still has nontrivial problems to be solved.
It may be possible to isolate that part, but doing so may be very difficult.
Why would it be difficult? How would it be difficult?
There is a rather utopian notion of mind uploading, according to which you blindly scan a brain without understanding it, and then turn that data directly into a simulation. I’m aware of two such scanning paradigms. In one, you freeze the brain, microtome it, and then image the sections. In the other, you do high-resolution imaging of the living brain (e.g. fMRI) and then construct a state-machine model for each small 3D volume.
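As a toy illustration of what “a state-machine model for each small 3D volume” could mean, here is a sketch; the linear-plus-tanh update rule and every name in it are assumptions for illustration, not a claim about actual modeling practice:

```python
import numpy as np

# Toy sketch of the MRI-plus-inference paradigm: one small state machine per
# imaged voxel, with transition weights fit from recorded data. The dynamics
# and all names here are illustrative assumptions.

class VoxelStateMachine:
    def __init__(self, n_states: int, n_inputs: int):
        # In the real paradigm these would be inferred from imaging data.
        self.W_self = np.random.randn(n_states, n_states) * 0.1
        self.W_in = np.random.randn(n_states, n_inputs) * 0.1
        self.state = np.zeros(n_states)

    def step(self, neighbor_activity: np.ndarray) -> np.ndarray:
        """Advance one timestep, driven by the activity of neighboring voxels."""
        self.state = np.tanh(self.W_self @ self.state + self.W_in @ neighbor_activity)
        return self.state

# The whole-brain model is then a 3D grid of these, each wired to its neighbors.
voxel = VoxelStateMachine(n_states=8, n_inputs=6)
print(voxel.step(np.random.randn(6)))
```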
To turn the images of those microtomed sections into an accurate dynamical model requires a lot of interpretive knowledge. The MRI-plus-inference pathway sounds much more plausible as a blind path to brain simulation. But either way, you are going to know the original physical 3D location of every element in your simulation, and functional neuroanatomy is already quite sophisticated. It won’t be hard to single out the sim-neurons specific to a particular anatomical macroregion.
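Concretely, if every simulated element carries its source coordinates, picking out a macroregion reduces to a lookup against an atlas. A minimal sketch, where `atlas_label` is a hypothetical stand-in for a query against a real registered brain atlas:

```python
import numpy as np

# Sketch: selecting sim-neurons by anatomical macroregion, given that the
# scan preserved each element's source coordinates. `atlas_label` is a
# hypothetical stand-in for a lookup against a registered brain atlas.

coords = np.random.rand(100_000, 3) * 100.0  # (x, y, z) per sim-neuron, in mm

def atlas_label(xyz: np.ndarray) -> str:
    # Placeholder: a real version would query an actual atlas volume.
    return "prefrontal_cortex" if xyz[0] > 80.0 else "other"

labels = np.array([atlas_label(c) for c in coords])
pfc_indices = np.flatnonzero(labels == "prefrontal_cortex")
print(f"{pfc_indices.size} sim-neurons fall inside the macroregion")
```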
There is virtually no barrier to the second threshold, while the first one still has nontrivial problems to be solved.
If you can simulate a human, you can immediately start experimenting with nonhuman cognitive architectures by lobotomizing or lesioning the simulation. But this would already be true for simulated animal brains as well.
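In code, a “lesion” on an emulation is almost trivially simple, which is the point. A sketch, assuming a hypothetical interface where the emulation exposes per-neuron activity:

```python
import numpy as np

# Sketch of a "lesion" on an emulation: silence every sim-neuron in a chosen
# region by masking it out of the update loop. The per-neuron interface is a
# hypothetical assumption about how the emulation exposes its state.

n_neurons = 100_000
activity = np.random.rand(n_neurons)  # current sim-neuron activations
region_indices = np.arange(0, 5_000)  # e.g. indices selected via an atlas

lesion_mask = np.ones(n_neurons)
lesion_mask[region_indices] = 0.0     # "lobotomize" the region
activity *= lesion_mask               # lesioned neurons contribute nothing

# Then re-run the behavioral tests and compare against the intact baseline.
```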
It won’t be hard to single out the sim-neurons specific to a particular anatomical macroregion.
That’s true, but the regions of the brain are ultimately not islands. The circuitry connecting them is itself intricate. You may, for instance, be able to extract the visual cortex and get it to do some computer vision for you, but I doubt an extracted prefrontal cortex will be useful without all the subsystems it depends on. More importantly, how to wire up new configurations (maybe you want a double prefrontal cortex: twice the cognitive power!) strikes me as a fundamentally difficult problem, as the sketch below suggests. At that point you probably need some legitimate high-level understanding of the components and their connective behaviors to succeed. By contrast, a vanilla emulation, where you aren’t modifying the architecture or performing virtual surgery, requires no such high-level understanding.
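To see where the difficulty actually lives, consider the naive version of that “double prefrontal cortex” over a connectome stored as a directed edge set (everything here is a made-up illustration). Duplicating the nodes and their internal wiring is mechanical; the boundary cases flagged in the comments are exactly the questions a blind scan never answers:

```python
# Toy sketch of "virtual surgery": duplicating a region in a connectome graph.
# Copying nodes and internal edges is mechanical; the hard, unanswered part
# is the boundary wiring, flagged below. All structures here are illustrative.

def duplicate_region(edges: set[tuple[int, int]], region: set[int]) -> set[tuple[int, int]]:
    offset = max(max(u, v) for u, v in edges) + 1
    clone = {n: n + offset for n in region}  # fresh IDs for the copied region
    new_edges = set(edges)
    for u, v in edges:
        if u in region and v in region:
            new_edges.add((clone[u], clone[v]))  # internal wiring: easy to copy
        elif v in region:
            # Boundary input: should the source feed both copies? Split its
            # signal? Without a high-level model, there is no right answer.
            new_edges.add((u, clone[v]))
        elif u in region:
            # Boundary output: two regions now drive the same downstream
            # target. Sum? Average? Arbitrate? Again, the scan doesn't say.
            new_edges.add((clone[u], v))
    return new_edges

edges = {(0, 1), (1, 2), (2, 3), (3, 4), (3, 5)}  # tiny toy connectome
print(sorted(duplicate_region(edges, region={1, 2, 3})))
```

The interior of the copy is trivial; deciding how two prefrontal cortices should share the inputs and outputs of one is where the high-level understanding gets smuggled back in.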