To me, the title implies that bad translation could be dangerous, whereas I think the actual intention is that machine translation involves difficulties that parallel the ones that make AGI dangerous.
Not all intent alignment problems involve existential risk.
I was sure the title used to say “safety problem”, not “intent alignment problem”. Did I imagine it?
[EDITED to add:] No, I didn’t. Right now one can see the original wording over at https://ai-alignment.com/, but since I wrote the above, the version here has had its title silently changed.
(By the way, I didn’t say anything about existential risk; there are dangers that fall short of existential risk.)
Good point, changed.
Originally it was “as an alignment problem”, but that phrasing has the problem that “alignment” also refers to “aligning” unaligned datasets. The new wording is bulkier but probably better overall.