This scenario certainly seems plausible, if overly benign and slow. What I find missing is a seamless HMI (human/machine interface), Neuralink-style, only safer, better, and more advanced. Basically, AI-augmented human cognition and communication. Note that the weakest links in human cognition are low introspection and an extremely poor, slow, and unreliable communication channel (language). Bypassing the need to communicate with words will be the real revolution.
I’m not very bullish on HMI. Humanity’s progress in understanding the brain is extremely slow, and because brain research is so hard to do, I don’t expect it to get much faster.
Basically, I expect humanity to build AGI way before we are even close to understanding the brain.
Well, narrow superintelligent AIs might help us understand the brain before then.
Well, maybe. I still think it’s easier to build AGI than to understand the brain, so even the smartest narrow AIs might not manage to build a consistent theory of the brain before someone else builds AGI.