Elon Musk has argued that humans can take in a lot of information through vision: by looking at a picture for one second, you absorb a great deal. Text and speech, however, are not very information dense. He argues that because we communicate information outwards through keyboards or speech, output is slow by comparison.
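To make that asymmetry concrete, here is a rough back-of-envelope comparison in Python. All of the numbers (typing speed, speaking rate, image size, bits per pixel) are illustrative assumptions, not measurements, and the raw bit counts ignore compression and what the brain actually extracts:

```python
# Back-of-envelope comparison of output via text/speech versus the raw
# information content of a glanced-at image. All constants are assumptions.

TYPING_WPM = 60           # assumed typing speed, words per minute
SPEECH_WPM = 150          # assumed speaking rate, words per minute
CHARS_PER_WORD = 5        # rough English average
BITS_PER_CHAR = 8         # uncompressed ASCII

IMAGE_PIXELS = 1_000_000  # a 1-megapixel image
BITS_PER_PIXEL = 24       # uncompressed RGB

def text_bits_per_second(wpm: int) -> float:
    """Raw bit rate of producing text at a given words-per-minute rate."""
    return wpm * CHARS_PER_WORD * BITS_PER_CHAR / 60

print(f"Typing:  ~{text_bits_per_second(TYPING_WPM):.0f} bits/s")
print(f"Speech:  ~{text_bits_per_second(SPEECH_WPM):.0f} bits/s")
print(f"Image glanced at for 1 s: ~{IMAGE_PIXELS * BITS_PER_PIXEL:,} bits")
```

Even with these crude numbers, the gap between keyboard/speech output and a single image is several orders of magnitude, which is the point being made.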
One possibility is that AI could help interpret the uploaded data, filling in details to make the uploaded information more useful. For example, you could “send” an image of something through the Neuralink, an AI would interpret it and fill in the details that are unclear, and then you would have an image, very close to what you imagined, containing several hundred or maybe thousands of kilobytes of information.
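Purely as an illustrative sketch of that idea (none of these functions correspond to any real Neuralink or AI interface; both are hypothetical placeholders), the point is the size asymmetry: a coarse, low-bandwidth signal goes in, and a stand-in for the generative model expands it into a much larger image:

```python
import numpy as np

def capture_thought_sketch() -> np.ndarray:
    """Hypothetical stand-in for the low-bandwidth signal read off the
    implant: a coarse 32x32 grayscale 'sketch' of what the user imagines."""
    return np.random.rand(32, 32)

def ai_fill_in_details(sketch: np.ndarray, upscale: int = 16) -> np.ndarray:
    """Placeholder for the generative model: here it merely upsamples by
    pixel repetition and adds noise, standing in for an AI that would
    invent plausible detail consistent with the sketch."""
    detailed = np.kron(sketch, np.ones((upscale, upscale)))
    detailed += np.random.normal(scale=0.02, size=detailed.shape)
    return np.clip(detailed, 0.0, 1.0)

sketch = capture_thought_sketch()   # small array, a few kilobytes
image = ai_fill_in_details(sketch)  # 512x512 array, megabytes uncompressed
print(sketch.nbytes, "bytes read from the implant ->", image.nbytes, "bytes of image")
```

The real system would obviously need a far better model than pixel repetition; the sketch only illustrates where the extra kilobytes would come from.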
The Neuralink would only need to increase the productivity of an occupation by a few percent to be worth the investment of 3,000–4,000 USD that Elon Musk believes the price will drop to.
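A rough payback estimate makes this concrete. The device price and the “few percent” figure come from the paragraph above; the salary is an assumption picked purely for illustration:

```python
# Payback period for the implant under assumed numbers.

DEVICE_PRICE_USD = 3_500      # midpoint of the 3,000-4,000 USD figure
ANNUAL_SALARY_USD = 60_000    # assumed salary, for illustration only
PRODUCTIVITY_GAIN = 0.02      # "a few percent" taken as 2%

annual_value = ANNUAL_SALARY_USD * PRODUCTIVITY_GAIN
payback_years = DEVICE_PRICE_USD / annual_value

print(f"Extra value per year: ~${annual_value:,.0f}")
print(f"Payback period: ~{payback_years:.1f} years")
```

Under these assumptions the device pays for itself in about three years, which is why even a small productivity gain could justify the price.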
I think the AI here is going to have to not just fill in the blanks but convert to a whole new intermediary format. I say this because there are lots of people who, despite appearing normal from the outside, don’t see mental images at all. A less extreme example is the split between people who do and don’t subvocalise whilst reading: I know that when I’m stuck in the middle of a novel it’s basically just a movie playing in my head, with no conscious spelling out of the words, but for other people there is a narrator. Because of these large differences between internal brain formats, some kind of common format will be needed as an intermediary.
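Purely as speculation about what such a common intermediary format could look like, here is a toy sketch: a semantic scene description that assumes neither a pictorial nor a verbal inner experience, which per-person decoders could translate into or out of. Every name in it is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SceneEntity:
    """One thing in the imagined scene, described semantically."""
    label: str
    attributes: list[str] = field(default_factory=list)

@dataclass
class IntermediaryScene:
    """Format-agnostic description: neither an image nor inner speech."""
    entities: list[SceneEntity]
    relations: list[tuple[str, str, str]]  # (subject, relation, object)

    def to_text(self) -> str:
        """Render for a 'narrator'-style mind; a visual-thinker decoder
        could instead render the same structure as an image."""
        return "; ".join(f"{s} {r} {o}" for s, r, o in self.relations)

scene = IntermediaryScene(
    entities=[SceneEntity("apple", ["red"]), SceneEntity("table", ["wooden"])],
    relations=[("apple", "is on", "table")],
)
print(scene.to_text())
```

The specific fields are invented; the point is only that whatever the per-person decoders produce, they would all need to target one shared representation like this.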
Personally I’m more interested in seeing (ethics aside) what happens when you give this to a child. If you stick a direct feed to a computer and the internet into someone’s brain whilst it’s still forming, I would not be surprised if what comes out the other end is quite unlike a regular human. The base model at the moment already has a 6-axis IMU, compass, and barometer; it would not surprise me if that information just got fused into that person’s regular experience, like those compass belts people have started wearing.
I find it likely that Neuralink will succeed in increasing the bandwidth for “uploading” information from the brain, and I think it will do so with the help of AI. For example, you could send an image of something through the Neuralink, an AI would interpret it and fill in the details that are unclear, and then you would have an image, very close to what you imagined, containing several hundred or maybe thousands of kilobytes of information.
I would be very interested to know whether self-reported variation in mental imagery significantly affects the ability to use such a system, and also how trainable that is as a skill.
It does seem like a reasonable analogy to think of the Neuralink as a “sixth sense” or as an extra (very complex) muscle.