I think the best way to deal with AI alignment is to create AI not as a separate entity, but as an extension and augmentation of ourselves. We are much better at using AI in narrow contexts than in real-world AGI scenarios, and we still have time to think about this before willy-nilly making autonomous agents. If humans can use AI and their own smarts to create functional brain-computer interfaces, the problem of aligned AI may not become a problem at all. Because the artificial intelligence is just an extension of yourself, of course it will be aligned with you: it is you! What I mean is that as humans become better at interfacing with technology, the line between AI and human blurs.
You stick wires into a human brain. You connect it up to a computer running a deep neural network. You optimize this network using gradient descent to maximize some objective.
To me, it is not obvious why the neural network would copy the values out of the human brain. After all, figuring out human values even given an uploaded mind is still an unsolved problem. You could get a UFAI with a meat robot. You could get an utter mess, thrashing wildly and incapable of any coherent thought. Evolution did not design the human brain to be easily upgradable. Most possible arrangements of components are not intelligences. While there is likely to be some way to upgrade humans and preserve our values, I’m not sure how to find it without a lot of trial and error. Most potential changes are not improvements.
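To make that concrete, here is a toy sketch (my own illustration, with made-up names like `brain_signals` standing in for whatever the wires actually read out, not anything from a real BCI setup): a one-layer network trained by gradient descent on some arbitrary objective. The point is that the update rule only ever refers to the objective we plugged in; nothing in it reaches over and copies the human's values.

```python
# Toy sketch: gradient descent on a one-layer network fed by hypothetical
# brain-interface signals. The update below depends only on the chosen
# objective (mean squared error against `target`), not on anything about
# the person the signals came from.
import numpy as np

rng = np.random.default_rng(0)

brain_signals = rng.normal(size=(100, 16))   # 100 samples, 16 recorded channels (made up)
target = rng.normal(size=(100, 1))           # whatever objective we happened to pick

W = rng.normal(scale=0.1, size=(16, 1))      # network weights

learning_rate = 0.01
for step in range(1000):
    prediction = brain_signals @ W                      # forward pass
    error = prediction - target
    loss = float(np.mean(error ** 2))                   # the chosen objective
    grad = brain_signals.T @ error * (2 / len(error))   # dLoss/dW
    W -= learning_rate * grad                           # gradient descent step
```

Whatever ends up in `W` is whatever minimizes that loss. That is a fact about the objective we wrote down, not about the values of the person on the other end of the wires.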
One major subfield within AI is understanding how the human brain works and effectively replicating it (while also making it more efficient with available technologies). I agree that we can’t just stick one end of a wire into a brain and the other into a machine learning algorithm; they certainly aren’t compatible. But the machine learning and AI technologies we have today allow us to gain a better understanding of the human brain and how it works. My belief is that we will eventually come to understand why humans are, to our knowledge, the greatest learning agents, and will identify the limitations that our technology can then eliminate.
The only reasonable solution is to merge with the technology, or risk becoming obsolete. However, I believe this will become obvious as we approach “all-powerful” AGI, which will almost certainly come about through attempts to replicate the human brain in technology. Because the two would share a similar structure, and because we would have to understand the brain in order to build one, linking them actually becomes trivial.