I think Eliezer would object to calling anything like that “related to AI safety”, because it might imply that working on those is relevant to THE AI Safety, which, he is convinced, has no hope of success at this point, and anything weaker just gives a false sense of security: “but we/people are working on it!”. See also his (rather dated and more optimistic) Rocket Alignment post.