One significant problem is that differential privacy requires trusting that the implementation is free from bugs, both intentional and unintentional ones. In some cases you can monitor the network packets (though who monitors all the network packets all the time?), but in many cases you can’t. That’s especially an issue with hardware devices that communicate using encryption.
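To make that concrete, here’s a toy sketch (purely illustrative, not any vendor’s actual telemetry code) of client-side randomized response. The report is a single bit either way, so a one-character bug in the client silently destroys the stated guarantee without anything looking different on the wire:

```python
import math
import random


def randomized_response(true_bit: int, epsilon: float) -> int:
    """Report a possibly-flipped bit with epsilon-local differential privacy."""
    # Tell the truth with probability e^eps / (e^eps + 1), otherwise lie.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit


def buggy_randomized_response(true_bit: int, epsilon: float) -> int:
    """Same signature, same-looking traffic, but no privacy at all."""
    # Missing parentheses: this evaluates to (e^eps / e^eps) + 1 == 2.0,
    # so the comparison below is always true and the true bit is always sent.
    p_truth = math.exp(epsilon) / math.exp(epsilon) + 1.0
    return true_bit if random.random() < p_truth else 1 - true_bit


if __name__ == "__main__":
    eps = math.log(3)  # truth with probability 3/4
    print([randomized_response(1, eps) for _ in range(10)])        # a mix of 0s and 1s
    print([buggy_randomized_response(1, eps) for _ in range(10)])  # always 1s
```

Someone watching packets sees a plausible-looking stream of bits from both versions; only reading the source (or running careful statistics over many users whose true values you already know) would distinguish them.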
I generally would trust that Firefox isn’t going to have an intentional bug in their telemetry, and I don’t think that Google would either (they have too much to lose from the bad publicity), but what about all of the miscellaneous ad companies? And anyone can have an unintentional bug in their implementation.
The companies involved have, as a general group, already shown that they will act maliciously, so I can’t trust them when they say they aren’t being malicious.
The approach the major browsers (except Firefox) have been taking is to provide new APIs that allow ad-related functionality without individual-level tracking (and then try to block cross-site tracking). Examples:
Apple/Safari has Privacy-Preserving Ad Click Attribution to support conversion tracking
Microsoft/Edge has Parakeet to support remarketing
Google/Chrome has the Trust Token API for countering spam, fraud, and DoS, and various other Privacy Sandbox proposals
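As a rough illustration of the flavor of these APIs, here’s a conceptual sketch of privacy-preserving conversion attribution (my own toy model, not Safari’s or anyone else’s actual API; the names, bit widths, and delays are invented). The browser joins click and conversion locally, and only a few low-entropy, delayed, identifier-free bits ever leave the device:

```python
import random
from dataclasses import dataclass


@dataclass
class StoredClick:
    advertiser_site: str
    campaign_id: int  # hypothetical cap of 6 bits (0-63) of entropy


def record_click(clicks: list[StoredClick], advertiser_site: str, campaign_id: int) -> None:
    """Browser-side: remember the ad click locally; nothing is reported yet."""
    clicks.append(StoredClick(advertiser_site, campaign_id & 0x3F))


def report_conversion(clicks: list[StoredClick], advertiser_site: str, conversion_value: int):
    """Browser-side: if a matching click exists, schedule a low-entropy,
    delayed report to the advertiser with no cookie or user id attached."""
    match = next((c for c in clicks if c.advertiser_site == advertiser_site), None)
    if match is None:
        return None
    return {
        "campaign_id": match.campaign_id,            # a few bits, not a user id
        "conversion": conversion_value & 0x0F,        # hypothetical 4-bit cap
        "send_after_hours": random.randint(24, 48),  # random delay to blunt timing joins
    }
```

The ad network ends up learning “campaign N produced a conversion worth roughly V”, which is the aggregate signal these APIs try to preserve while blocking per-user joins across sites.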
This seems like a good place to put it, to me. Users choose their browsers, and browsers are generally open source. This still does not do anything about same-site tracking, but in that case users are choosing which sites they interact with. Also, while this is being built with cross-site tracking use cases in mind, I would like to see it built in a way where individual sites can also use it to demonstrate that their data collection is private.
The companies involved have, as a general group, already shown that they will act maliciously

I don’t feel/think the same. Do you consider tracking generally to be the ‘malicious’ activity of the companies/organizations in the “general group”?
Assuming we’re even thinking about the same things, which might not be true, I’m struggling to think of any activity that was actually intended to harm anyone. I’m much more sympathetic to the idea that the relevant people are and were negligent (e.g. about proactively protecting people’s privacy).