I’m trying to think of ideas here. As a recap of what I think the post says:
China is still very much in the race
There is very little AI safety activity there, possibly due to a lack of reception of EA ideas.
^let me know if I am understanding correctly.
Some ideas/thoughts:
It seems to me that many in AI Safety or in other specific “cause areas” are already dissociating from EA, though not as much from LW.
I am not sure we should expect mainstream adoption of AI Safety ideas (it’s not really mainstream in the West, nor is EA or LW).
It seems like there are some communication issues (e.g. the org looking for funding) that this post can help with.
To me it is super interesting to hear that there is less resistance to the ideas of AI Safety in China, though I don’t want to fully believe that yet. Also, I’m not sure that the AI Safety field is people-bottlenecked right now; it seems we currently don’t really know what to do with more people.
Still, it’s clear that we need a strong field in China. Perhaps less alignment-focused and more governance-focused? Though my impression from your post is that governance is less doable, but maybe I am misunderstanding.
I might have more thoughts later on.
(For context, I have recently become involved in governance work on the EU AI Act.)
I’m saying that AI Safety really can’t take off as a field in China without a cadre of well-paid people working on technical alignment. If people were working on, say, interpretability here for a high wage ($50,000–$100,000 is considered a lot for a PhD in the field), the work would gain prestige and people would take it seriously. Otherwise it just sounds like LARP. That’s how you do field building in China: you don’t go around making speeches, you hire people.
My gut feeling is that hiring 10 expats in Beijing to do “field building” gets less field building done than hiring 10 college grads in Shanghai to do technical alignment work.
My experience is that the Chinese (I am one) will dissociate from the “strange parts” of EA, such as mind uploading, minimization of suffering, or even life extension: the basic conservative argument for the continuation of the species and of life as human beings is what works.
CN is fundamentally conservative in that sense. The complications are not many and largely revolve around:
How is this good for the Party?
How is this good for “keeping things in harmony/close to nature/Taoism”?
Emotional appeals around safety and effort. The story that effort = worth is strong there, and devaluation of human effort provokes disgust.
Mind uploading and life extension are far easier sells than ending factory farming, in my experience. It’s not the tech part we find cringe, but the altruism part. The Chinese soul wants to make money, be cool, and work in a cool field. People trying to sell me altruism are probably part of a doomsday cult (e.g. Catholicism).
I think the lack of altruism comes from the desire to compete and be superior (via effort). There’s no widespread desire for altruism as the West understands it. How would you be better than others?
The upside is that “speciesism” is the norm. There would be zero worry about wanting to give AI “fair rights.” I do think there is some consideration for “humankind,” but individual rights (for humans or animals) are strictly “nice to have”: unnecessary and often harmful.