On data privacy
Here are some quick notes on how I think about LessWrong user data.
Any data that’s already public—reacts, tags, comments, etc.—is fair game. It just seems nice to do some data science and help folks uncover interesting patterns here.
At the other end of the spectrum, the team and I generally never look at users’ upvotes and downvotes, except when there’s strong enough suspicion of malicious voting behavior (like targeted mass downvoting).
Then there’s stuff in the middle. Like, what if we tell a user “you and this user frequently upvote each other”? That particular example currently feels like it reveals too much private data. As another example, the other day a teammate and I discussed whether, on the matchmaking page, we could show you recently active users who had already checked you, to make it more likely you’d find a match. We tentatively figured it would be fine to do this as long as seeing a name on your match page gave you no more than roughly a 5:1 update about that person having checked you. We sketched out some algorithms to implement this that would also be stable under repeated refreshing and the like. (We haven’t implemented the algorithm or the feature yet.)
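To make that concrete, here’s a rough sketch of the kind of sampler we had in mind. To be clear, this is just an illustration rather than the algorithm we actually sketched, nothing like it is implemented, and all names and probabilities below are made up:

```typescript
// Hypothetical sketch of a bounded-leakage sampler for the match page.
// Idea: include users who checked you with probability 0.5 and everyone else
// with probability 0.1, so seeing a name shifts the odds of "they checked me"
// by at most 0.5 / 0.1 = 5:1 (about log2(5) ≈ 2.3 leaked bits). The inclusion
// decision is a deterministic hash of the (viewer, candidate) pair, so
// refreshing the page never draws a fresh sample.
import { createHash } from "crypto";

interface Candidate {
  userId: string;
  checkedViewer: boolean; // the private signal we only want to partially reveal
}

// Deterministic "coin flip" in [0, 1) for a given (viewer, candidate) pair.
function stableUniform(viewerId: string, candidateId: string): number {
  const digest = createHash("sha256").update(`${viewerId}:${candidateId}`).digest();
  return digest.readUInt32BE(0) / 2 ** 32;
}

const P_SHOW_IF_CHECKED = 0.5;
const P_SHOW_IF_NOT = 0.1;

// `candidates` would be, e.g., the recently active users eligible to be shown.
function sampleRecentlyActive(viewerId: string, candidates: Candidate[]): string[] {
  return candidates
    .filter((c) => {
      const p = c.checkedViewer ? P_SHOW_IF_CHECKED : P_SHOW_IF_NOT;
      return stableUniform(viewerId, c.userId) < p;
    })
    .map((c) => c.userId);
}
```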
So my general take on features “in the middle” is, for now, to treat them on a case-by-case basis, with some principles like “try hard to avoid revealing anything that’s not already public; if you do reveal something, preserve plausible deniability bounded by some number of leaked bits; only reveal metadata or aggregate data; reveal it only to one other user or a small set of users; think about whether the information is actually high or low stakes; and see if you can get away with using only data from people who opted in to revealing it”.
I would strongly advocate against this kind of reasoning; any such decision-making procedure relies on the assumption that you have correctly figured out all the ways the information can be used, and that there isn’t a clever way for an adversary to extract more of it than you anticipated. This is bound to fail—people come up with clever ways to extract more private information than anticipated all the time. For example:
Timing Attacks on Web Privacy
We describe a class of attacks that can compromise the privacy of users’ Web-browsing histories. The attacks allow a malicious Web site to determine whether or not the user has recently visited some other, unrelated Web page. The malicious page can determine this information by measuring the time the user’s browser requires to perform certain operations. Since browsers perform various forms of caching, the time required for operations depends on the user’s browsing history; this paper shows that the resulting time variations convey enough information to compromise users’ privacy.
Robust De-anonymization of Large Datasets (How to Break Anonymity of the Netflix Prize Dataset)
We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world’s largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber’s record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.
De-anonymizing Social Networks
We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy “sybil” nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary’s auxiliary information is small.
On the Anonymity of Home/Work Location Pairs
Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual’s home and workplace can both be deduced from a location trace, then the median size of the individual’s anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census track and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual’s home and work locations can both be deduced from the data.
Bubble Trouble: Off-Line De-Anonymization of Bubble Forms
Fill-in-the-bubble forms are widely used for surveys, election ballots, and standardized tests. In these and other scenarios, use of the forms comes with an implicit assumption that individuals’ bubble markings themselves are not identifying. This work challenges this assumption, demonstrating that fill-in-the-bubble forms could convey a respondent’s identity even in the absence of explicit identifying information. We develop methods to capture the unique features of a marked bubble and use machine learning to isolate characteristics indicative of its creator. Using surveys from more than ninety individuals, we apply these techniques and successfully reidentify individuals from markings alone with over 50% accuracy.
Those are some interesting papers, thanks for linking.
In the case at hand, I do disagree with your conclusion, though.
In this situation, the most a user could find out is who checked them in dialogues. They wouldn’t be able to find any data about checks not concerning themselves.
If they happened to be a capable enough dev and were willing to go through the schleps to obtain that information, then, well… we’re a small team and the world is on fire, and I don’t think we should really be prioritising making Dialogue Matching robust to this kind of adversarial cyber threat for information of comparable scope and sensitivity! Folks with those resources could probably uncover all kinds of private vote data already, if they wanted to.
I agree that it wouldn’t be a very good use of your resources. But there’s a simple solution for that—only use data that’s already public and users have consented to you using. (Or offer an explicit opt-in where that isn’t the case.)
I do agree that in this specific instance, there’s probably little harm in the information being revealed. But I generally also don’t think that that’s the site admin’s call to make, even if I happen to agree with that call in some particular instances. A user may have all kinds of reasons to want to keep some information about themselves private, some of those reasons/kinds of information being very idiosyncratic and hard to know in advance. The only way to respect every user’s preferences for privacy, even the unusual ones, is by letting them control what information is used and not make any of those calls on their behalf.
Hmm, most of these don’t really apply here? Like, it’s not like we are proposing to do anything complicated. We are just saying “throw in some kind of sample function with a 5:1 ratio, and ensure you can’t get redraws”. I feel like I can just audit the whole trail of implications myself here (and, like, substantially more so than I am capable of auditing all of our libraries for security vulnerabilities, which are sadly reasonably frequent in the JS land we occupy).
My point is less about the individual example than the overall decision algorithm. Even if you’re correct that in this specific instance, you can verify the whole trail of implications and be certain that nothing bad happens, a general policy of “figure it out on a case-by-case basis and only do it when it feels safe” means that you’re probably going to make a mistake eventually, given how easy it is to make a mistake in this domain.
I am not sure what the alternative is. What decision-making algorithm do you suggest for adopting new server-side libraries that might have security vulnerabilities? Or for switching out existing ones? My only algorithm is “figure it out on a case-by-case basis and only do it when it feels safe”. What’s the alternative?
For site libraries, there is indeed no alternative since you have to use some libraries to get anything done, so there you do have to do it on a case-by-case basis. In the case of exposing user data, there is an alternative—limiting yourself to only public data. (See also my reply to jacobjacob.)
If “show people who have checked you” is a thing that would improve the experience, then I as a user would appreciate a checkbox along the lines of “users whom you have checked, and who have not checked you back, can see that you have checked them”. I, for one, would check such a checkbox.
(If others want this too, upvote @faul_sname’s comment as a vote! It would be easy to build; most of my uncertainty is about how it would change the experience.)
One level of privacy is “the site admins have access to data, but don’t abuse it”. But there are other levels, too. Let’s take something I assume is private in this way: the identities of who has up/downvoted a post or comment.
Can someone, e.g., inspect the page in the browser (or similar) in order to uncover these identities?
Can someone look through the Forum Magnum GitHub repo and, based on the open-source code, find ways to uncover these identities?
Can someone on GreaterWrong identify these identities?
Are any other things that seem private (e.g. PMs) vulnerable to stuff like the above?
None of the votes or PMs are vulnerable in that way (barring significant coding errors). Posts have an overall score field, but the power or user id of a vote on a post or comment can’t be queried except by that user or an admin.
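As a hypothetical illustration of the kind of check involved (this is not the actual ForumMagnum code, and the field names are made up), the voter’s identity on a vote only resolves for that same user or for an admin:

```typescript
// Hypothetical sketch, not ForumMagnum's real schema or resolvers: the idea is
// that the sensitive field (who cast the vote) is only returned to the voter
// themselves or to an admin; everyone else gets nothing.

interface Vote {
  documentId: string; // the post or comment the vote is on
  userId: string;     // who cast the vote (the sensitive field)
  power: number;      // karma weight of the vote
}

interface CurrentUser {
  _id: string;
  isAdmin: boolean;
}

function resolveVoterId(vote: Vote, currentUser: CurrentUser | null): string | null {
  const isOwner = currentUser !== null && currentUser._id === vote.userId;
  const isAdmin = currentUser !== null && currentUser.isAdmin;
  return isOwner || isAdmin ? vote.userId : null;
}
```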
I just got a “New users interested in dialoguing with you (not a match yet)” notification, and when I clicked on it, the first thing I saw was that exactly one person in my Top Voted users list was marked as recently active in dialogue matching. I don’t vote much, so my Top Voted users list is in fact an All Voted users list. This means that either the new user interested in dialoguing with me is the one guy who is conspicuously presented at the top of my page, or it’s some random user that I’ve never interacted with and have no way of matching with.
This is technically not a privacy violation because it could be some random user, but I have to imagine this is leaking more bits of information than you intended it to (it’s way more than a 5:1 update), so I figured I’d report it as a bug, or rather an unanticipated feature.
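As a back-of-the-envelope illustration with made-up numbers (I don’t know the real base rates, so treat this as a sketch of the reasoning rather than a measurement):

```typescript
// Rough back-of-the-envelope, with assumed numbers, for why this is way more
// than a 5:1 update. Suppose that in a given week only ~5% of the users I've
// voted on would be marked "recently active in dialogue matching" by chance,
// while someone who just checked me is essentially certain to be marked active
// (they were literally just active there).
const pMarkedActiveByChance = 0.05;   // assumed base rate, purely illustrative
const pMarkedActiveIfCheckedMe = 1.0; // assumption: checking me counts as recent activity

// Likelihood ratio for "this person checked me", given that exactly this one
// person from my Top Voted list is highlighted right after the notification:
const likelihoodRatio = pMarkedActiveIfCheckedMe / pMarkedActiveByChance;
console.log(`${likelihoodRatio}:1 update, vs. the intended 5:1 bound`); // 20:1
```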
It further occurs to me that anyone who was dedicated to extracting information from the system could completely deanonymize their matches by setting up a simple script to scrape https://www.lesswrong.com/dialogueMatching every minute or so and cross-referencing “new users interested” notifications with the moment someone shoots to the top of the “recently active in dialogue matching” list. It sounds like you don’t care about that kind of attack, though, so I guess I’m mentioning it for completeness.
We thought of these things!
The notifications for matches only go out on a weekly basis, so I don’t think timing it would work. Also, we don’t sort users who checked you any differently from other users on your page, so you might have been checked by a person whom you haven’t voted much on.