Couldn’t the money spent on AI safety research be better spent on, say, AI research?
There’s something like 100 times as much funding for AI research as there is for AI safety research. In general, it seems like it would be weird to have only 1% of the effort in a project spent on making sure the project is doing the thing that it should be doing.
For this specific question, I like Stuart Russell’s approach:

My proposal is that we should stop doing AI in its simple definition of just improving the decision-making capabilities of systems. […] With civil engineering, we don’t call it “building bridges that don’t fall down” — we just call it “building bridges.” Of course we don’t want them to fall down. And we should think the same way about AI: of course AI systems should be designed so that their actions are well-aligned with what human beings want. But it’s a difficult unsolved problem that hasn’t been part of the research agenda up to now.