Hmm, so is your argument basically “human-level intelligence is so hard, machines will not get there in the foreseeable future, so there is no need to worry about AI alignment”? Or is there something else?
No, I don’t think it is. AI systems can influence decisions even in their fairly primitive state, and we must think carefully about how we use them. But my position is that we don’t need to worry about these machines developing extremely sophisticated behaviours any time soon, which keeps the alignment problem somewhat in check.