Bostrom says it is probably infeasible to ‘download’ large chunks of data from one brain to another, because brains are idiosyncratically formatted and meaning is likely spread holistically through patterns in a large number of neurons (p46). Do you agree? Do you think this puts such technology out of reach until after human-level machine intelligence?
It may be possible to decode the encodings used by different brains: representations in particular sensory modalities are localized to a few square inches of cortex.
I think it’s probably enough of an obstacle that an AGI is more likely to be developed first, so in that sense I do agree with Bostrom. However, I wouldn’t say it’s completely infeasible, only that it would first require considerable advances in pattern recognition, our understanding of the brain, and our technological ability to interface with it. The idiosyncratic morphology and distributed, non-localized information storage make for a very difficult engineering problem, but I’m optimistic it can be overcome one way or another.
We’ve already had some (granted, very limited) success with decoding imagery from the visual cortex through “dumb” (non-AGI) machine learning algorithms, which makes deeper interaction seem at least possible. If we can make advances in the above-mentioned fields, I would guess the biggest limitation will be that we’ll never have a standardized “plug’n’play” protocol for brains—interfaces will require specialized tuning for each individual and a learning period during which the algorithms can “figure out” how your brain is wired up.
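To make the “decoding imagery with dumb algorithms” point concrete, here is a minimal, purely illustrative sketch of the kind of per-subject decoder such studies rely on: a plain linear classifier trained on one individual’s visual-cortex responses. All the data below is synthetic (the voxel patterns and stimulus labels are invented for the example); in real work they would be measured activations and the actual presented stimuli.

```python
# Minimal sketch of a per-subject visual decoder (synthetic data only).
# Assumption: features are voxel activations, labels are stimulus categories.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels, n_categories = 200, 500, 4

# Hypothetical stand-in for recorded voxel responses: each stimulus
# category evokes a slightly different spatial pattern, buried in noise.
category_patterns = rng.normal(size=(n_categories, n_voxels))
labels = rng.integers(0, n_categories, size=n_trials)
voxels = category_patterns[labels] + rng.normal(scale=3.0, size=(n_trials, n_voxels))

# A simple linear classifier is the "dumb" pattern-recognition step:
# it learns this one subject's idiosyncratic mapping from voxel patterns
# to stimuli, with no general model of how brains encode images.
decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, voxels, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = {1 / n_categories:.2f})")
```

The structure also illustrates the calibration point above: the decoder is fit to one person’s voxel patterns and would have to be retrained from scratch for anyone else, which is exactly the per-individual “learning period” I’d expect any brain interface to need.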