There’s also whole-brain emulation, which doesn’t require nanotech to function—just slightly better scanners, substantially better neuroscience, and exponentially better computers.
We have plenty of models of neurons and some of them imitate neurons very well.
Eugene Izhikevich simulated an entire human-brain-scale network with his model and saw some pretty interesting emergent behaviour (granted, the anatomy had to be generated randomly at every iteration, so we still need better computers).
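To give a sense of how simple these models can be, here's a minimal sketch of the basic two-variable Izhikevich neuron, using the published "regular spiking" parameters and a constant input current. It's just the single-neuron equations, not his full-scale simulation, and the input current is an arbitrary illustrative value:

```python
# Minimal Izhikevich neuron (2003 model), "regular spiking" parameters.
# dv/dt = 0.04*v^2 + 5*v + 140 - u + I ;  du/dt = a*(b*v - u)
# spike/reset: if v >= 30 mV then v <- c, u <- u + d
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.5                      # Euler integration step, ms
T = 1000                      # total simulated time, ms
I = 10.0                      # constant injected current (illustrative value)

v, u = -65.0, b * -65.0       # initial membrane potential and recovery variable
spike_times = []

for step in range(int(T / dt)):
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30:               # spike detected: record it and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in {T} ms")
```

The whole spiking behaviour of one neuron comes down to two coupled update rules and a reset, which is why networks of these are so cheap to simulate.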
That’s true, but we need to get it really, really close. Even relatively small statistical deviations from the behavior of the real neurons are probably intolerable. Besides, real neurons are not interchangeable: they have unique statistical biases and are influenced by a variety of factors not modeled by modern simulations, like neurotransmitter diffusion, glial activity, and subtle quirks of specific dendrites and axons.
Right now, even if you gave us a high-speed brain scanner, a high-speed computer, and an unlimited budget, we wouldn’t have the capability to interpret the image data the scanner produced, or even be quite sure which immunostains to use for the optical imaging to pin down the required details. I expect it to take at least five to ten years for us to get the theoretical details ironed out.
It requires substantially better scanners, and a fixation process that preserves all the relevant features.
Vitrification seems to work pretty well in terms of preserving the relevant details. Observing some of those features is going to require an as-yet-not-fully-understood immunostaining process, but that falls under the "substantially better neuroscience" part. As far as the scanners go, the resolution of existing SEM technologies is already adequate or near-adequate. It's mostly a question of adding more beams and developing more automated methods, so the scanning can be more parallel.
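To make the "more beams" point concrete, here's a rough back-of-envelope sketch. Every number in it (voxel size, per-beam imaging rate) is an assumption picked for illustration, not a measured figure:

```python
# Back-of-envelope: how whole-brain scan time scales with the number of beams.
# All constants below are illustrative assumptions, not measured figures.
BRAIN_VOLUME_NM3 = 1.4e24      # ~1.4 litres expressed in cubic nanometres
VOXEL_SIZE_NM = 10             # assumed isotropic voxel edge length
VOXELS_PER_SEC_PER_BEAM = 1e7  # assumed single-beam imaging rate
SECONDS_PER_YEAR = 3.15e7

def scan_years(num_beams: int) -> float:
    """Wall-clock years to image the whole volume with num_beams in parallel."""
    voxels = BRAIN_VOLUME_NM3 / VOXEL_SIZE_NM**3
    return voxels / (VOXELS_PER_SEC_PER_BEAM * num_beams) / SECONDS_PER_YEAR

for beams in (1, 1_000, 100_000, 1_000_000):
    print(f"{beams:>9} beams -> {scan_years(beams):,.1f} years")
```

Whatever the real per-beam rate turns out to be, the total scan time divides straight through by the number of beams, which is why parallelism is the lever that matters.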
Do you have any reference?
According to PZ Myers you can only do that with exceptionally small samples of tissue.
PZ Myers has unreasonably high standards for ‘relevant details.’ Demanding one millisecond total fixation time (with every atom being in precisely the same position as it was during life) is totally ridiculous. If you want to study intraneuron cell biology, sure, you need that, but for brain emulation, all you care about is the connectivity of the network and the long-term statistical biases of particular neurons’ synaptic connections (plus glial traits, naturally), which are (probably) visible from features many orders of magnitude more durable than the kinds of data he’s talking about. Also, his comments about accelerating the speed of the network are kind of bizarrely ignorant, given how smart a guy he clearly is.
The only way the issues he mentions are problematic is if high-detail intra-neuron computation turns out to be necessary AND long-term state dependent, which the evidence argues against (the Blue Brain Project has produced realistic synchronized firing activity in a simulated neocortical column using relatively simple neuron models).
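I can't reproduce the Blue Brain work here, but as a toy illustration of that general point (simple point-neuron models producing collective rhythmic firing), here's a sketch that loosely follows the 1000-neuron demo from Izhikevich's 2003 paper, extending the single-neuron sketch above to a network. The parameter ranges and noise levels are the ones from that demo; nothing here is calibrated to a real cortical column:

```python
import numpy as np

# Toy network of simple Izhikevich-type neurons, loosely following the
# 1000-neuron demo in Izhikevich (2003). This is NOT the Blue Brain model;
# it only illustrates that very simple neuron models can produce
# collective rhythmic/synchronized firing.
rng = np.random.default_rng(0)
Ne, Ni = 800, 200                         # excitatory / inhibitory counts
re, ri = rng.random(Ne), rng.random(Ni)   # per-neuron heterogeneity

a = np.concatenate([0.02 * np.ones(Ne), 0.02 + 0.08 * ri])
b = np.concatenate([0.2 * np.ones(Ne), 0.25 - 0.05 * ri])
c = np.concatenate([-65 + 15 * re**2, -65 * np.ones(Ni)])
d = np.concatenate([8 - 6 * re**2, 2 * np.ones(Ni)])
S = np.hstack([0.5 * rng.random((Ne + Ni, Ne)),    # excitatory weights
               -rng.random((Ne + Ni, Ni))])        # inhibitory weights

v = -65.0 * np.ones(Ne + Ni)
u = b * v
firings = []                               # (time_ms, neuron_index) pairs

for t in range(1000):                      # 1000 ms of simulated time
    I = np.concatenate([5 * rng.standard_normal(Ne),
                        2 * rng.standard_normal(Ni)])   # background noise
    fired = np.where(v >= 30)[0]
    firings.extend((t, n) for n in fired)
    v[fired] = c[fired]
    u[fired] += d[fired]
    I += S[:, fired].sum(axis=1)           # synaptic input from this ms's spikes
    for _ in range(2):                     # two 0.5 ms Euler steps for stability
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)

print(f"{len(firings)} spikes from {Ne + Ni} neurons in 1 s of simulated time")
```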
As far as a reference goes, there’s this study, in which they took a rat’s brain, vitrified it, and examined it at fine detail, demonstrating “good to excellent” preservation of gross cellular anatomy.
Well, he’s a developmental biologist specializing in the vertebrate nervous system.
One millisecond fixation time might be an excessive requirement, but in order to perform an emulation accurate enough to preserve the self, you will probably need much more detail than the network topology and some statistics. Synapses have fine surface features that may well be relevant, and neurons may have relevant internal state stored as DNA methylation patterns, concentrations of various chemicals, maybe even protein folding states. Some of these features are probably difficult to preserve and possibly difficult to scan.
EDIT:
Actually, they vitrified 475-micrometre slices of the hippocampus of rat brains. It’s no mystery that small samples can be vitrified without using toxic concentrations of cryoprotectants.
Moreover, the paper says: “Finally, all slices were transferred to the two wells of an Oslo-type recording chamber [ … ] and incubated with aCSF at 34–37 °C for at least 1 h before being used in experiments.”
“Following initial incubation for 60 min or more at 35 °C in aCSF to allow recovery from the shock of slice preparation, [ … ]”
I’m not a biologist, so I might be missing something, but my understanding is that this means ischemia is somehow not an issue for these thin slices, while it certainly is when dealing with a whole brain.
The surface details we can read with SEM, and we can observe chemical/protein concentrations through immunostaining and sub-wavelength optical microscopy (an SEM and SWOM hybrid is my bet for the technology we wind up using). I don’t think there’s strong evidence for DNA methylation or protein state being used for long-term data storage; if evidence arises, we’ll re-evaluate then. But modern neuron models don’t account for those and, again, function realistically, so they’re not critical for the computation.

The details we’re reading likely wouldn’t have to be simulated outright; they would just alter the shape of the probability distribution your simulation is sampling from. A lot of the fine stuff is so noisy that it isn’t practical to store data in it. The stuff we know is involved we can definitely preserve. As a general rule, if the data is lost within minutes of death, it’s probably also lost during the average workday.
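A crude sketch of what I mean by “altering the shape of the probability distribution”: the scan would hand the emulator a few per-synapse statistics, and the simulation samples from those instead of tracking the underlying biochemistry. The parameter names and distributions below are illustrative assumptions, not anything measured:

```python
import numpy as np

# Sketch of the "probability distribution" idea: rather than simulating every
# fine biochemical detail of a synapse, the scan supplies per-synapse
# statistics (here, an assumed mean weight, spread, and release probability),
# and the emulation just samples from them. All names/values are illustrative.
rng = np.random.default_rng(42)

class StochasticSynapse:
    def __init__(self, mean_weight: float, weight_sd: float, release_prob: float):
        # These three numbers stand in for whatever long-term per-synapse
        # biases a scan would actually recover.
        self.mean_weight = mean_weight
        self.weight_sd = weight_sd
        self.release_prob = release_prob

    def transmit(self) -> float:
        """Postsynaptic current delivered for one presynaptic spike."""
        if rng.random() > self.release_prob:   # vesicle release is probabilistic
            return 0.0
        return rng.normal(self.mean_weight, self.weight_sd)

# Two synapses with different long-term biases behave differently on average,
# even though no molecular detail is modelled explicitly.
strong = StochasticSynapse(mean_weight=1.0, weight_sd=0.2, release_prob=0.8)
weak = StochasticSynapse(mean_weight=0.3, weight_sd=0.1, release_prob=0.3)
for name, syn in [("strong", strong), ("weak", weak)]:
    mean_current = np.mean([syn.transmit() for _ in range(10_000)])
    print(f"{name} synapse: average current per spike ~ {mean_current:.3f}")
```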
I honestly don’t think cryoprotectant damage is anywhere near the biggest problem here. I’m sure it does cellular damage, but it seems to leave cell morphology essentially intact, and it isn’t reactive enough to really screw up most of the things we know we have to care about in terms of cell state. Ischemia is a bigger problem, and one of my points of skepticism about non-standby cryonics: four-plus hours at room temperature simply seems too long. That said, as our understanding of cell death improves, we’re starting to notice that most brain death seems to be failure of the cells’ oxygen metabolism, not failure of their synaptic connections, and flatlining cases suggest there’s some reason to hope for the time being. I’d like to see studies on exactly how long it takes the relevant neural details to begin to break down at room temperature; in any case, I’d like to see the science done.