I’ve talked with someone in EA Hong Kong who follows the progress of translating effective altruism into the Chinese language and culture; it is not trivial to do this optimally, and suboptimal translations carry substantial risks. Some excerpts mentioned in the linked post:
Doing mass outreach in another language creates irreversible “lock in” [...] China faces especially high risk of lock in, because you also face the risk of government censorship
Likewise, one of the possible translations of “existential risk” (生存危机) is very close to the name of a computer game (生化危机), so it doesn’t have the credibility one might want.
To do this well, we’ll need people who are experts in both the local culture and effective altruism in the West. We’ll also need people who are excellent writers and communicators in the new language.
Initial efforts to expand effective altruism into new languages should focus on making strong connections with a small number of people who have relevant expertise, via person-to-person outreach instead of mass media.
The arguments about EA being niche and difficult to communicate through low-fidelity means apply just as strongly to EA-style AI safety. However, the author also says:
If written materials are used, then it’s better to focus on books, academic articles and podcasts aimed at a niche audience.