I was sure the title used to say “safety problem”, not “intent alignment problem”. Did I imagine it?
[EDITED to add:] No, I didn’t. Right now one can see the original wording over at https://ai-alignment.com/, but since I wrote the above, the version here has had its title silently changed.
(By the way, I didn’t say anything about existential risk; there are dangers that fall short of existential risk.)