this could have been noise, but i noticed an increase in text expressing fear of spies among the things i’ve read in the past few days[1]. i actually don’t know how much this concern is shared by LW users, so i think it might be worth writing that, in my view:
(AFAIK) both governments[2] are currently reacting inadequately to unaligned optimization risk. as a starting prior, there’s no strong reason to fear one government {observing/spying on} ML conferences/gatherings more than the other, absent evidence that one or the other will start taking unaligned optimization risks very seriously, or that one or the other is prone to race towards ASI.
(AFAIK, we have more evidence that the U.S. government may try to race, e.g. this, but i could have easily missed evidence as i don’t usually focus on this)
tangentially, a more-pervasively-authoritarian government could be better situated to prevent unilaterally-caused risks (cf a similar argument in ‘The Vulnerable World Hypothesis’), if it sought to. (edit: and if the AI labs closest to causing those risks were within its borders, which they are not atm)
to be clear, this argument feels sad to me (or reflective of a sad world?), but it seems true in this case
that said, i don’t typically focus on governance or international AI politics, so i have not put much thought into this.
[1] examples: yesterday, saw this twitter/x post (via this quoting post)
today, opened lesswrong and saw this shortform about two uses of the word spy, and this shortform about how it’s hard to have evidence against the existence of manhattan projects
this was more than usual, and i sense that it’s part of a pattern
[2] i.e. those of the US and China