I can think of a few different ways, requiring no more than a few dozen software-engineer-hours, that this could be solved effectively enough to make it a non-issue.
If my browser displays it as text, I can copy it. If you try dickish JavaScript hacks to stop me from copying it the normal way, I can screenshot it. If you display it as some kind of hardware-accelerated DRM’d video that can’t be screenshotted, I can get out a fucking camera and take a fucking picture. If I post it somewhere and you try to shut me down, you invoke the Streisand Effect and now all of Reddit wants (and has) a copy, to show their Censorship Fighter status.
tl;dr: No, you can’t stop people from copying things on the Internet.
Of course. But a “good enough” solution to the stated problem doesn’t need to be able to do that. There are a number of different approaches I can think of off the top of my head, in increasing order of complexity:
Just keep it from getting indexed by Google, and expire it after a certain period (a minimal sketch of this follows the list below). Sure, a sufficiently determined attacker could just spider LW every day, but do we actually think there’s an organized conspiracy out there against us?
Limit access to people who can be trusted not to copy it—either based on karma as suggested, or individual vetting. I’m not a fan of this option, but it could certainly be made to work, for certain values of “work”.
Implement a full on OTR style system providing full deniability through crypto. Rather than stopping content from being copied, just make sure you can claim any copy is a forgery, and nobody can prove you wrong. A MAJOR engineering effort of course, but totally possible, and 100% effective.
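To make the first option concrete, here’s a minimal sketch of the noindex-plus-expiry idea, assuming a Flask-style app; the route, the in-memory store, and the 30-day window are all made up for illustration:

```python
from datetime import datetime, timedelta, timezone

from flask import Flask, abort, make_response

app = Flask(__name__)

# Hypothetical in-memory store: post id -> (body, creation time).
POSTS = {42: ("sensitive discussion...", datetime.now(timezone.utc))}
EXPIRY = timedelta(days=30)  # arbitrary window, for illustration

@app.route("/private/<int:post_id>")
def private_post(post_id):
    post = POSTS.get(post_id)
    if post is None:
        abort(404)
    body, created = post
    # Expire the content outright instead of trying to stop copying.
    if datetime.now(timezone.utc) - created > EXPIRY:
        abort(410)  # Gone
    resp = make_response(body)
    # Ask well-behaved crawlers not to index or cache the page.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive"
    return resp
```

None of this stops a determined copier, of course; it just keeps the content out of search results and off the live site after the window closes.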
I can’t help but see two major flaws:
1) If I link to a major, encrypted offshoot of LessWrong, people will AUTOMATICALLY be suspicious and it will damage PR.
2) Why would it be any easier to cry “it’s a forgery” in this situation versus me posting a screenshot of an unencrypted forum? o.o Especially given #1...
3) I can share my password / decryption key / etc..
Well, point 3 can be eliminated by proper use of crypto; see OTR.
The response to point 2 is that if it is publicly known that messages’ contents are formally, mathematically, provably deniable (which proper crypto implementation can guarantee), people are disincentivized from even bothering to re-post content in the first place.
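For intuition, here’s a toy sketch of the mechanism behind this kind of deniability, modelled loosely on OTR’s use of shared-key MACs (this is not OTR itself): because both parties hold the same MAC key, either one could have produced any authenticated message, so a transcript proves nothing to a third party.

```python
import hashlib
import hmac
import os

# Both Alice and Bob hold this shared session key (toy setup).
shared_key = os.urandom(32)

def authenticate(key: bytes, message: bytes) -> bytes:
    """Tag a message with an HMAC under the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Alice sends an authenticated message to Bob...
tag = authenticate(shared_key, b"meet at noon")

# ...but Bob (or anyone who later learns the key) can mint an equally
# valid tag on a message Alice never wrote:
forged_tag = authenticate(shared_key, b"something embarrassing")

# Both tags verify identically under the shared key, so a transcript
# cannot prove authorship to a third party: that is the deniability.
assert hmac.compare_digest(tag, authenticate(shared_key, b"meet at noon"))
assert hmac.compare_digest(
    forged_tag, authenticate(shared_key, b"something embarrassing"))
```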
Point 1, however, I agree with completely, and that’s why I’m not actually advocating this solution.
You’re flat-out wrong about #3. Encryption is just a mathematical algorithm; it doesn’t care who uses it, only whether you have the key.
In short, encryption is just a very complex function: you feed in Key + Message and get an Output, f(K, M) = O.
I already have access to Key and Message, so I can share both of those. The only thing you can possibly secure is f().
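To make the function view concrete, here’s a toy stdlib-only illustration, with XOR standing in for a real cipher; the principle is the same for any symmetric scheme:

```python
import os

def f(key: bytes, message: bytes) -> bytes:
    """Toy symmetric cipher: XOR the message with the key. f(K, M) = O."""
    return bytes(m ^ k for m, k in zip(message, key))

key = os.urandom(16)
output = f(key, b"secret LW post..")

# The algorithm doesn't care who runs it. If I hand you (key, output),
# you recover the message exactly as I can:
assert f(key, output) == b"secret LW post.."
```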
If you have a cryptographic program, like OTR, I can just decompile it and get f(), then post a modified version that lets the user manually configure their key. (I think this is actually trivial in OTR, but it’s been years since I poked at it.)
If it’s a website where I login and it auto-decrypts things for me, then I can just send someone the URL and the key I use.
Point 2 seems to rely on Point 3, and as far as I’m aware the only formally mathematically provably deniable method WHEN THE KEY IS COMPROMISED is a one-time pad.
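For what it’s worth, the one-time pad’s deniability is easy to demonstrate: for any ciphertext you can construct a pad that “decrypts” it to any cover message of the same length, so possession of the ciphertext proves nothing about the plaintext (toy sketch):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

real_message = b"the incriminating post"
pad = os.urandom(len(real_message))
ciphertext = xor(real_message, pad)

# To deny the real message, cook up a fake pad that "decrypts" the same
# ciphertext to an innocuous cover message of equal length:
cover_message = b"a recipe for flapjacks"
fake_pad = xor(ciphertext, cover_message)

assert xor(ciphertext, pad) == real_message
assert xor(ciphertext, fake_pad) == cover_message
```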
I’m not sure how much crypto experience you have, but “and no one else knows the key” is a foundation of every algorithm I have ever worked on, and I’m reasonably confident that it’s a mathematical requirement. I simply cannot imagine how you could possibly write a crypto algorithm that is secure EVEN with a compromised key.
EDIT: If you still think I’m wrong, can you please give me a sense of your crypto experience? For reference: I’ve met with the people who wrote OTR and hang out in a number of crypto circles, but only do fairly basic stuff in my actual work. I do still have a hobby interest in it, and follow it, but the last time I did any serious code breaking was about a decade ago.
You seem to be using a very narrow definition of “crypto”. I’m not sure whether you’re just being pedantic about definitions, in which case you may be correct, or whether you’re actually disputing the substance of what I’m saying. To answer your question: I’m not a cryptographer, but I have a CS degree and am quite capable of reading and understanding crypto papers (though not of retaining the knowledge for long). It’s been several years since I read the relevant papers, so I might be getting some of the details wrong in how I’m explaining it, but the basic concept of deniable message authentication is well understood by mainstream cryptographers.
You seem to be aware of the existence of OTR, so I’m confused—are you claiming that it doesn’t accomplish what it says it does? Or just that something about the way I’m proposing to apply similar technology to this use case would break some of its assumptions? The latter case is entirely possible, as so far I’ve put a grand total of about 5 minutes of thought into it… if that’s the case, I’d be curious to know which of its assumptions my proposed use case would break.
If I give you my key, you can pretend to be me on OTR. I’ve had friends demonstrate this to me, but I’ve never done it myself, so: 99% confidence.
Technical disagreement, as near as I can tell, since you’re not advocating for the solution.
This must be why the media companies haven’t given up on DRM yet. They think if they can just unmask and arrest the ringleaders of the “organized conspiracy out there” then copy protection will start working, when in reality any random person can become a “conspiracy” member with nothing more than a little technical knowledge, a little free time, and a moral code that encourages copying.
To be fair, the “vetting” and “full deniability” options don’t really apply to the ??AA. The best pre-existing example for those kinds of policies might be the Freemasons or the Mormons? In neither case would I be confident that the bad PR they’ve avoided by hiding embarrassing things hasn’t been worse than the bad PR they’ve abetted by obviously dissembling and/or by increasing the suspicion that they’re hiding even worse things.
Exactly. That’s why I’m not actually advocating any of these technical solutions, just pointing out that they do exist in solution-space.
The solution that I’m actually advocating is even simpler still: do nothing. Rely on self-policing and the “don’t be an asshole” principle, and in the event that that fails (which it hasn’t yet), then counter bad speech with more speech: clearly state “LW/SIAI does not endorse this suggestion, and renounces the use of violence.” If people out there still insist on slandering SIAI by association to something some random guy on LW said, then fuck em—haters gonna hate.