I’ve been thinking about this more, and I recall a study that’s already been done which seems to capture most of the important data, and which even has a built-in quality-assurance process that provides some assurance that the measurements contain meaningful content… reconstruction of observed images from brain scanning. Once you see the first example, it suggests a whole range of progressively more complicated research projects: reconstruction of an audio channel, reconstruction of audio+video, conducting a structured interview and reconstructing the audio and text transcripts of the subject’s vocal production, and perhaps eventually running a video or audio chat of some sort, with an attempt to reconstruct both sides of the conversation from two separate but synchronized brain scans. Neat :-)
ETA: The conversational reconstruction attempt seems like it would have specific cryonics implications, in terms of gaining data on the specific social dynamics whose viscerally experienced realities matter enormously to people when the subject is discussed in near mode.
Do you know how that “reconstruction” works? They are not just displaying brain data with a little post-processing added. If you play the video, you’ll see completely spurious text floating around in some of the reconstructions. That’s because the reconstructed video is a weighted sum over a few hundred videos taken from YouTube. They have a computational model of the mapping from visual input to neural activity, but when they invert the mapping, they assume the input was some linear combination of those YouTube videos; the reconstruction just amounts to determining the weights. So maybe in reality you just see a caravan of elephants walking in front of you, but your “reconstructed” visual experience also has text from a music video flickering past, because that music video supplies one of the basis vectors and the model assigned it a high similarity.
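For anyone who wants the mechanics spelled out, here is a minimal sketch of that kind of reconstruction-by-weighted-average. All names, shapes, and the cosine-similarity scoring are illustrative assumptions on my part, not details from the actual study:

```python
import numpy as np

# Toy stand-in data: a fixed library of clips and a fitted linear encoding model.
rng = np.random.default_rng(0)
n_videos, n_features, n_voxels = 300, 50, 1000
library_features = rng.normal(size=(n_videos, n_features))  # features per library clip
library_frames = rng.normal(size=(n_videos, 64, 64))        # stand-in pixel data
W = rng.normal(size=(n_features, n_voxels))                 # encoding model: features -> voxels

def reconstruct(observed_activity, top_k=10):
    """Return a weighted average of the top-k library clips whose *predicted*
    brain activity best matches the observed scan."""
    predicted = library_features @ W                         # (n_videos, n_voxels)
    sims = predicted @ observed_activity
    sims /= np.linalg.norm(predicted, axis=1) * np.linalg.norm(observed_activity) + 1e-9
    best = np.argsort(sims)[-top_k:]                         # indices of best matches
    weights = np.clip(sims[best], 0, None)
    weights /= weights.sum() + 1e-9
    # The output is literally a blend of library clips, which is why content
    # from unrelated clips (e.g. floating text) can bleed into the result.
    return np.tensordot(weights, library_frames[best], axes=1)

# Example: a "scan" produced by the first library clip, plus noise.
observed = library_features[0] @ W + rng.normal(scale=0.1, size=n_voxels)
frame = reconstruct(observed)                                # a (64, 64) blended frame
```

The last line is the whole story: the output can only ever be a blend of library clips, so anything present in those clips, captions included, can leak into the “reconstruction.”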
If you click through to the discussion on YouTube, there are commenters freaking out and speculating that the words must be subconscious thoughts of the people who had their brains scanned. So we’re getting several lessons at once from this exercise: if people are reconstructed from a lossy backup, there may be spurious insertions as well as lost data; and the non-technical public will interpret artefacts as real, in a creative way that attributes far more power to a technology than it possesses.
I didn’t “know” the details of the reconstruction, but I suspected it was relatively simple, and you’ve confirmed that. I also agree denotationally with everything you said about inevitable bugs and a public that leaves something to be desired. Nonetheless it is neat anyway, because Sturgeon’s law (90% of everything is crap) is roughly right, and this is non-crappy enough to deserve some appreciation :-)
Also, if someone were going to non-destructively collect data from various sources to attempt a side-load, constraining on observable frozen anatomy, recordable functional outcomes, etc., then this general kind of raw data might help constrain the final model, or speed up the annealing, by outright ruling out certain sorts of overall neural facts, like which things will or won’t trigger vivid recognition (and with what sort of emotional resonances).
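To make the “ruling out” idea concrete, here is a toy sketch of constraint-filtered annealing. Everything in it, including the recognition data, the threshold model, and the energy function, is a hypothetical illustration of the general technique, not anyone’s actual side-loading procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recorded facts: stimulus index -> did the subject show vivid recognition?
recognition_data = {0: True, 3: False, 7: True}

def model_predicts_recognition(model, stimulus):
    # Toy stand-in: a model is a weight vector; "recognition" is a threshold response.
    return model[stimulus] > 0.5

def violates_constraints(model):
    """Reject any candidate whose predicted recognition contradicts the recordings."""
    return any(model_predicts_recognition(model, s) != seen
               for s, seen in recognition_data.items())

def anneal(energy, n_params=10, steps=5000, temp=1.0, cooling=0.999):
    model = rng.random(n_params)
    best = model.copy()
    for _ in range(steps):
        candidate = model + rng.normal(scale=0.05, size=n_params)
        if violates_constraints(candidate):
            continue  # hard constraint: discard before even evaluating the energy
        if energy(candidate) < energy(model) or rng.random() < np.exp(
                (energy(model) - energy(candidate)) / temp):
            model = candidate
            if energy(model) < energy(best):
                best = model.copy()
        temp *= cooling
    return best

# Example: fit some arbitrary target subject to the recognition constraints.
result = anneal(lambda m: float(np.sum((m - 0.7) ** 2)))
```

The point is just that each hard constraint from recorded data prunes candidates before any expensive evaluation, which is the intuition behind “speeding up the annealing.”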