Is there a good case for the usefulness (or uselessness) of brain-computer interfaces in AI alignment (à la Neuralink etc.)? I've searched around a bit, but there seems to be no write-up of a path to making AI go well using BCIs.
Edit: A post about this is now up.
Maybe if we could give a human more (emulated) cortical columns without driving them insane in the process, we'd end up with a limited superintelligence who maybe isn't completely Friendly, but also isn't completely alien to human values. If we just start with the computer, all bets are off. The hybrid might still go insane later, though, and arms-race scenarios remain a concern: reckless approaches might produce hybrid intelligence sooner, but they'd also be less stable. The end result of most unFriendly AIs is that all the humans are dead. It takes a perverse kind of near-miss to get to the hellish, worse-than-death scenarios: an unFriendly AI that doesn't just kill us. A crazy hybrid might be exactly that.
If the smartest humans could be made just a little smarter, maybe we could solve the alignment problem before AI goes FOOM. Otherwise, the next best approach seems to involve somehow getting the AI to solve the problem for us, without killing everyone (or worse) in the meantime. Of course, that only works if the enhanced humans are working on alignment, and not just on improving AI.
If the Borg Collective becomes the next Facebook, then at least we’re not all dead. Unfortunately, an AI trying to FOOM on a pure machine substrate would still outcompete us poor meat brains.
Well, it might make it easier for someone to steal your credit card info if you’re wearing one of these headsets.
I don't know of any write-up, and I do think it would be great for someone to make one. I've definitely discussed this for many hours with people over the years.
Some related tags that might cover some of this space:
https://www.lesswrong.com/tag/neuromorphic-ai
https://www.lesswrong.com/tag/brain-computer-interfaces
But overall, it doesn't look like there are any posts that really cover the AI alignment angle.