I agree that the problem of aligning artificial intelligence values with human values is, in practice, unsolvable, except in one very particular case: when the artificial intelligence and the human are the same thing. That is, we stop developing AI on dry hardware and instead develop it in wet brains, for which Elon Musk’s Neuralink approach could be a step in the right direction.
Intelligence amplification (broadly construed) seems useful to me, because it might produce smarter humans who can solve the alignment problem more efficiently. It doesn’t obviate the need for a solution, though: even in a world full of human intelligence amplification techniques, people will still be advancing AI, and the AI will still surpass human intelligence.
(Fundamentally, this is because ‘AI’ is a much less constrained region of the solution space than ‘enhanced human’: in searching for smarter AI, any design is on the table, whereas enhanced humans explore only a very specific part of the tree of possible mind designs, and must remain perpetually cautious about radical changes that might cause harmful value drift.)
But when I think of ‘intelligence amplification’, I’m mostly thinking of tweaking genes or biochemistry a bit to raise IQ. ‘Develop AI inside human brains’ seems far less promising to me, especially if people at Neuralink think that having the AI inside of you somehow ensures it’s aligned with the rest of you. For a counterexample showing that ‘X is a part of me’ doesn’t magically confer alignment, consider pathogens or cancers.