I don’t have confidence in my models of how coherent and competent governments are at getting and using data like this. The primary buyers of location data are advertisers and business planners looking for statistical correlations for targeting and decisions. This is creepy, but not directly comparable to “targeted by the Chinese government”.
My competing theories of “targeted by the Chinese government” threats are:
they’re hyper-competent and have employees or agents at most carriers who will exfiltrate needed data, so stopping the explicit sale just means the flow is less visible; or
they’re as bureaucratic and confused as everything else, so even if they know where you are, they’re unable to do much with it.
I think the tension is over what it even means to be targeted by a government.
I don’t have confidence in my models of how coherent and competent governments are at getting and using data like this.
The Office of the Director of National Intelligence wrote a report about this question that was declassified last year. They use the abbreviation CAI for “commercially available information”.
“2.5. (U) Counter-Intelligence Risks in CAI. There is also a growing recognition that CAI, as a generally available resource, offers intelligence benefits to our adversaries, some of which may create counter-intelligence risk for the IC. For example, the January 2021 CSIS report cited above also urges the IC to ‘test and demonstrate the utility of OSINT and AI in analysis on critical threats, such as the adversary use of AI-enabled capabilities in disinformation and influence operations.’”
Last month there was a political fight about warrant requirements when US intelligence agencies use commercially bought data, likely driven in part by the concerns raised in that report.
I think the tension is over what it even means to be targeted by a government.
Here, I mean that you are doing something that’s of interest to Chinese intelligence services. People who lobby on Chinese AI policy probably fall into that class.
I’m not sure to what extent people working at top AI labs might be blackmailed by the Chinese government to do things like give them their source code.
[note: I suspect we mostly agree on the impropriety of the open sale and dissemination of this data. This is a narrow objection to the IMO hyperbolic focus on government assault risks.]
I’m unhappy with the phrasing of “targeted by the Chinese government”, which IMO implies violence or other real-world interventions when the major threats are “adversary use of AI-enabled capabilities in disinformation and influence operations.” Thanks for mentioning blackmail: that is indeed a risk I put in the first category, and it presumably becomes more feasible with phone location data. I don’t know how much it matters, but there is probably a margin where it does.
I don’t disagree that this purchasable data makes advertising much more effective (in fact, I worked at a company based on this for some time). I only mean to say that “targeting” in the sense of disinformation campaigns is a very different level of threat from “targeting” of individuals for government ops.
This is a narrow objection to the IMO hyperbolic focus on government assault risks.
Whether or not you face government assault risks depends on what you do. Most people don’t face such risks. Some people engage in work or activism that creates them.
The Chinese government has strategic goals, and most people are unimportant to those. Some people, however, work on topics like AI policy in which the Chinese government has an interest.