The FCC just fined US phone carriers for selling the location data of US customers to anyone willing to buy it. The fines don't seem high enough to deter this kind of behavior.
The buyers likely include, directly or indirectly, the Chinese government.
What does the US Congress do to protect against spying by China? Ban TikTok, of course, instead of actually protecting the data of US citizens.
If your threat model includes the Chinese government targeting you, assume that they know where your phone is, and turn it off when going somewhere you don’t want the Chinese government (or, for that matter, anyone with a decent amount of capital) to know about.
I feel like comparing this enforcement action with the TikTok ban misses the actual primary concern about TikTok, which is content curation by its opaque algorithm, not data privacy per se.
By analogy, if a Soviet state-owned enterprise in 1980 had wanted to purchase NBC, would/should we have allowed that? If your answer is “no,” keeping in mind how many people get their news via TikTok, why would/should we allow a company that is effectively CCP-owned, or at least heavily CCP-influenced, to control what content our people see?
Politico wrote, “Perhaps the most pressing concern is around the Chinese government’s potential access to troves of data from TikTok’s millions of users.” The concern that TikTok supposedly is spyware is frequently made in discussions about why it should be banned.
If the main issue is content moderation, the best way to deal with it would be to legislate transparency around moderation decisions and require TikTok to outsource those decisions to a US contractor.
I don’t have confidence in my models of how coherent and competent governments are at getting and using data like this. The primary buyers of location data are advertisers and business planners looking for statistical correlations for targeting and decisions. This is creepy, but not directly comparable to “targeted by the Chinese government”.
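As a side note on why the commercial sale alone is creepy even without a state actor: “anonymized” location pings are notoriously easy to link back to individuals. A minimal sketch of the idea, using entirely made-up data (no real broker’s schema or API is assumed here):

```python
# Toy illustration: each record is (pseudonym, place, hour). An observer
# who knows just a few (place, hour) facts about a target can often
# narrow a large "anonymized" dataset down to a single pseudonym.

def match_candidates(pings, known_points):
    """Return pseudonyms whose ping history contains every known (place, hour) point."""
    by_user = {}
    for user, place, hour in pings:
        by_user.setdefault(user, set()).add((place, hour))
    # A pseudonym matches if the known points are a subset of its pings.
    return [u for u, pts in by_user.items() if known_points <= pts]

# Synthetic data for three pseudonymous users.
pings = [
    ("u1", "office_A", 9), ("u1", "gym_B", 18), ("u1", "home_C", 22),
    ("u2", "office_A", 9), ("u2", "bar_D", 18),
    ("u3", "office_E", 9), ("u3", "gym_B", 18),
]

# Knowing the target was at office_A at 9 and gym_B at 18 singles out one user.
print(match_candidates(pings, {("office_A", 9), ("gym_B", 18)}))  # ['u1']
```

This is the mechanism behind the well-known result that a handful of spatio-temporal points uniquely identifies most people in a mobility dataset; it works for an advertiser and an intelligence service alike.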
My competing theories of “targeted by the Chinese government” threats are:
1. They’re hyper-competent and have employees or agents at most carriers who will exfiltrate whatever data they need, so stopping the explicit sale just makes the collection less visible.
2. They’re as bureaucratic and confused as every other large organization, so even if they know where you are, they’re unable to do much with it.
I think the tension is over what it even means to be targeted by a government.
The Office of the Director of National Intelligence wrote a report on this question that was declassified last year. It uses the abbreviation CAI for “commercially available information”.
“2.5. (U) Counter-Intelligence Risks in CAI. There is also a growing recognition that CAI, as a generally available resource, offers intelligence benefits to our adversaries, some of which may create counter-intelligence risk for the IC. For example, the January 2021 CSIS report cited above also urges the IC to ‘test and demonstrate the utility of OSINT and AI in analysis on critical threats, such as the adversary use of AI-enabled capabilities in disinformation and influence operations.’”
Last month there was a political fight about warrant requirements for US intelligence agencies’ use of commercially bought data, likely driven in part by the concerns raised in that report.
Here, I mean that you are doing something of interest to Chinese intelligence services. People who want to lobby on Chinese AI policy probably fall into that class.
I’m not sure to what extent people working at top AI labs might be blackmailed by the Chinese government into doing things like handing over their source code.
[note: I suspect we mostly agree on the impropriety of the open sale and dissemination of this data. This is a narrow objection to the IMO hyperbolic focus on government assault risks.]
I’m unhappy with the phrasing of “targeted by the Chinese government”, which IMO implies violence or other real-world interventions when the major threats are “adversary use of AI-enabled capabilities in disinformation and influence operations.” Thanks for mentioning blackmail—that IS a risk I put in the first category, and presumably becomes more possible with phone location data. I don’t know how much it matters, but there is probably a margin where it does.
I don’t disagree that this purchasable data makes advertising much more effective (in fact, I worked at a company based on this for some time). I only mean to say that “targeting” in the sense of disinformation campaigns is a very different level of threat from “targeting” of individuals for government ops.
Whether or not you face government assault risks depends on what you do. Most people don’t face them; some people engage in work or activism that does create such risks.
The Chinese government has strategic goals, and most people are unimportant to those. Some people, however, work on topics like AI policy in which the Chinese government has an interest.
Leave your phone elsewhere, remove its battery, or put it in a Faraday cage if you’re concerned about state-level actors:
https://slate.com/technology/2013/07/nsa-can-reportedly-track-cellphones-even-when-they-re-turned-off.html