My point, clearly not well expressed, is that the main reason why AI alignment has to be figured out in advance is not even mentioned in the OP’s dialogue:
We think the most important thing to do next is to advance our understanding of rocket trajectories until we have a better, deeper understanding of what we’ve started calling the “rocket alignment problem.” There are other safety problems, but this rocket alignment problem will probably take the most total time to work on, so it’s the most urgent.
… why? So what if this problem remains after the other problems are solved and the rockets are flying every which way? I have tried to answer that question, since Eliezer doesn’t in this post, despite this being the main impetus of MIRI’s work.
I feel like the post is trying to convince the reader that AI alignment needs to be solved AT ALL. You can worry about arguing when it needs to be solved after the other person is convinced there is a problem to solve in the first place.