Maybe if we could give a human more (emulated) cortical columns without driving him insane in the process, we'd end up with a limited superintelligence who maybe isn't completely Friendly, but also isn't completely alien to human values. If we just start from the computer side, all bets are off. The hybrid might still go insane later, though, and arms race scenarios remain a concern: reckless approaches might get us hybrid intelligence sooner, but the result would be less stable. The end result of most unFriendly AIs is that all the humans are dead; it takes a perverse kind of near-miss to reach the hellish, worse-than-death scenarios, an unFriendly AI that doesn't just kill us. A crazy hybrid might be exactly that.
If the smartest humans could be made just a little smarter, maybe we could solve the alignment problem before AI goes FOOM. Otherwise, the next best approach seems to be somehow getting the AI to solve the problem for us, without it killing everyone (or worse) in the meantime. Of course, that only helps if those smarter minds are working on alignment, and not just on improving AI.
If the Borg Collective becomes the next Facebook, then at least we’re not all dead. Unfortunately, an AI trying to FOOM on a pure machine substrate would still outcompete us poor meat brains.