The more I see of AI, the more I think we need something like Neuralink to make advances as swiftly as possible. Humans are already aligned-enough to human values (albeit not, in general, to one another), and if we can augment human intelligence fast enough, we might be able to solve alignment before everything goes bad. But that doesn’t seem to be the cool thing to work on in the tech world nowadays.
Ignoring that physical advancements are harder than digital ones—inserting probes into our brains even more so given the medical and regulatory hurdles—that would also augment our capacity to innovate toward AGI proportionally faster as well, so I’m not sure what benefit there is. On the contrary, giving AI ready-made access to our neurons seems detrimental.
Even so, I agree that such an augment would be very interesting. Feelings like that, though, are why the accelerating march toward AGI seems inevitable.
It would give you very clean training data, assuming a very high resolution neural link with low electrical noise.
You would directly have your Xs and Ys to regress between. (X = input into a human brain subsystem, Y = calculated output)
You could directly train AI models to mimic this if it’s helpful for AGI, and could work on ‘interpretability’ that might give us the insight to understand how the brain processes data and what its actual algorithm is.
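A minimal sketch of that framing, with heavy assumptions: synthetic data stands in for real neural recordings, and the “subsystem” is just an arbitrary linear map plus noise. The point is only that clean paired recordings reduce the problem to ordinary supervised regression:

```python
import numpy as np

# Hypothetical setup: treat recordings from a brain subsystem as a
# supervised dataset. X = activity entering the subsystem, Y = the
# activity it emits. Everything here is synthetic and illustrative;
# real neural data would be far messier.
rng = np.random.default_rng(0)

n_samples, n_in, n_out = 1000, 32, 8
true_map = rng.normal(size=(n_in, n_out))            # the subsystem's unknown transform
X = rng.normal(size=(n_samples, n_in))               # "input" activity
Y = X @ true_map + 0.1 * rng.normal(size=(n_samples, n_out))  # noisy "output" activity

# Regress Y on X: a model trained to mimic the subsystem.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# With low-noise, high-resolution data the fit recovers the
# underlying transform closely.
print(np.allclose(W, true_map, atol=0.05))  # → True
```

With real recordings the map would presumably be nonlinear and the noise structured, so a neural network would replace the least-squares fit, but the X-to-Y regression framing is the same.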
That was my goal ten years ago, then my timelines got too short. Bio tech is just so slow to push forward compared to computer tech.