By your analogy, one of the main criticisms of doing MIRI-style AGI safety research now is that it’s like 10th-century Chinese philosophers doing Saturn V safety research based on what they knew about fire arrows.
This is a fairly common criticism, yeah. The point of the post is that MIRI-style AI alignment research is less like this and more like Chinese mathematicians researching calculus and gravity, which is still difficult, but much easier than attempting to do safety engineering on the Saturn V far in advance :-)
Don’t sell yourself short in the effort to seem humble: it’s an entirely feasible research effort.