That’s an interesting insight. There should be another path, though: visual imagination, which already runs at (roughly?) the same speed as visual perception. We can already decode, to some extent, the images someone is imagining, and with uploads, putting images directly into the visual cortex should be comparatively straightforward, so we can skip all that rendering-geometric-forms-into-pixels and decoding-pixels-back-into-geometric-forms business. If you want the upload to see a black dog, you just stimulate “black” and “dog” rather than painting anything.
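To make the contrast concrete, here’s a toy sketch of the two paths. Every function in it is a hypothetical stub (none of this machinery exists); it only illustrates why direct concept activation is so much cheaper than the render-then-decode round trip:

```python
def render_to_pixels(scene):
    """Stub for the expensive path: rasterize the scene into a full frame."""
    width, height = 1920, 1080
    return [[(0, 0, 0)] * width for _ in range(height)]  # ~2M pixel values

def decode_pixels(pixels):
    """Stub for the visual system's job: infer concepts back out of raw pixels."""
    return [("black", "dog")]  # in reality, a huge amount of inference

def stimulate_concept(name):
    """Stub for directly activating a concept representation in an upload."""
    print(f"activating concept: {name}")

scene = [("black", "dog")]

# Path 1: render geometry into ~2 million pixel values, then decode them back.
percept = decode_pixels(render_to_pixels(scene))

# Path 2: skip pixels entirely -- two activations instead of two megapixels.
for attribute, obj in scene:
    stimulate_concept(attribute)  # "black"
    stimulate_concept(obj)        # "dog"
```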
Yes! I suspect that eventually this could be an interesting application of cheap memristor/neuromorphic designs, if they become economically viable.
It should be possible to exploit the visual imagination/dreaming circuitry the brain already has and make it more consciously controllable for an AGI, perhaps even to the point of being able to enter lucid dream worlds while fully conscious.