I feel like the post is trying to convince the reader that AI alignment needs to be solved AT ALL. You can worry about arguing over when it needs to be solved after the other person is convinced there is a problem to solve in the first place.