Intelligence amplification (broadly construed) seems useful to me, because it might make smarter humans who can more efficiently solve the alignment problem. It doesn’t obviate the need for a solution, because you’ll still have people advancing AI in a world full of human intelligence amplification techniques, and the AI will still surpass human intelligence.
(Fundamentally, because ‘AI’ is a much less constrained part of the solution space than ‘enhanced human’; in searching for smarter ‘AI’, any design is on the table, whereas enhanced humans are exploring a very specific part of the tree of mind designs, and need to be perpetually cautious about any radical changes that might cause bad value drift.)
But when I think of ‘intelligence amplification’, I’m mostly thinking of tweaking genes or biochemistry a bit to raise IQ. ‘Develop AI inside human brains’ seems far less promising to me, especially if people at Neuralink think that having the AI inside of you somehow ensures that it’s aligned with the rest of you. For a proof of concept that ‘X is a part of me’ doesn’t magically transmit alignment, consider pathogens, or cancers.