"Assuming we completely solved the problem of making AI do what its instructor tells it to do"
This seems to either (a) assume the whole technical alignment problem out of existence, or (b) claim that paperclippers are just fine.