I’ll restate here a third option that I proposed in the censored thread (woohoo, I have read a thread Eliezer Yudkowsky doesn’t want people to read, and that you, dear reader of this comment, probably can’t!): add an option when creating a post to make it viewable only by people at or above a certain karma, or to have it disappear after a week or so for people below that karma. This is based on something 4chan does, where all threads are deleted once they become inactive, to encourage people to discuss freely.
This would keep these threads from showing up when people Googled LessWrong. It could also let us discuss phyggishness without making LessWrong look bad on Google.
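For concreteness, here is a minimal sketch of what such a visibility check might look like. All of the names (Post, min_karma, created_at, PUBLIC_WINDOW) are hypothetical illustrations, not anything from the actual LessWrong codebase, and this implements the second variant: public for a week, then karma-gated.

```python
from datetime import datetime, timedelta
from collections import namedtuple

# Hypothetical field names; the real LW data model may differ.
Post = namedtuple("Post", ["min_karma", "created_at"])

PUBLIC_WINDOW = timedelta(days=7)  # after this, low-karma readers lose access

def can_view(post, viewer_karma, now=None):
    """Return True if a reader with `viewer_karma` may see `post`."""
    now = now or datetime.utcnow()
    if post.min_karma is None:
        return True                      # ordinary public post
    if viewer_karma >= post.min_karma:
        return True                      # trusted readers always see it
    # Everyone else only sees it during the initial window, after which it
    # "disappears" for them (the 4chan-style expiry described above).
    return now - post.created_at < PUBLIC_WINDOW

# Example: a post gated at 500 karma, created two weeks ago.
old_post = Post(min_karma=500, created_at=datetime.utcnow() - timedelta(days=14))
assert can_view(old_post, viewer_karma=800)      # high karma: still visible
assert not can_view(old_post, viewer_karma=10)   # low karma: gone after a week
```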
Yes, and if we all put on black robes and masks to hide our identities when we talk about sinister secrets, no one will be suspicious of us at all!
You can’t reliably make things on the internet go away.
You can make them hard enough to access that they won’t be stumbled upon by random people wondering what LessWrong is about, which is basically good enough for preserving LessWrong’s reputation.
I was thinking about people posting screenshots.
Agreed. It only takes one high-karma user posting a screenshot on reddit of LW’s Secret Thread Where They Discuss Terrorism or whatever...
I can think of a few different ways, requiring no more than a few dozen software-engineer-hours, that this could be solved effectively enough to make it a non-issue.
If my browser displays it as text, I can copy it. If you try dickish JavaScript hacks to stop me from copying it the normal way, I can screenshot it. If you display it as some kind of hardware-accelerated DRM’d video that can’t be screenshotted, I can get out a fucking camera and take a fucking picture. If I post it somewhere and you try to shut me down, you invoke the Streisand Effect and now all of Reddit wants (and has) a copy, to show their Censorship Fighter status.
tl;dr: No, you can’t stop people from copying things on the Internet.
Of course. But a “good enough” solution to the stated problem doesn’t need to be able to do that. There are a number of different approaches I can think of off the top of my head, in increasing order of complexity:
Just keep it from getting indexed by Google, and expire it after a certain period (a rough sketch of this approach follows the list). Sure, a sufficiently determined attacker could just spider LW every day, but do we actually think there’s an organized conspiracy out there against us?
Limit access to people who can be trusted not to copy it—either based on karma as suggested, or individual vetting. I’m not a fan of this option, but it could certainly be made to work, for certain values of “work”.
Implement a full-on OTR-style system providing full deniability through crypto. Rather than stopping content from being copied, just make sure you can claim any copy is a forgery, and nobody can prove you wrong. A MAJOR engineering effort of course, but totally possible, and 100% effective.
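For the first (and simplest) option, a rough sketch of the server-side logic might look like the following. The Thread model and render_thread interface are made up for illustration; only the X-Robots-Tag header and the robots meta tag are real, standard mechanisms for keeping a page out of search indexes.

```python
from datetime import datetime, timedelta
from collections import namedtuple

Thread = namedtuple("Thread", ["created_at", "html_body"])  # hypothetical model

EXPIRY = timedelta(days=30)  # hypothetical retention period before the thread vanishes

def render_thread(thread, now=None):
    """Serve a quarantined thread: never indexed, and gone after EXPIRY."""
    now = now or datetime.utcnow()
    if now - thread.created_at > EXPIRY:
        return "410 Gone", {}, "This thread has expired."
    headers = {
        # HTTP header and meta tag both tell well-behaved crawlers
        # (Google included) not to index or archive the page.
        "X-Robots-Tag": "noindex, noarchive",
    }
    body = '<meta name="robots" content="noindex, noarchive">' + thread.html_body
    return "200 OK", headers, body
```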
I can’t help but see three major flaws:
1) If I link to a major, encrypted offshoot of LessWrong, people will AUTOMATICALLY be suspicious and it will damage PR.
2) Why would it be any easier to cry “it’s a forgery” in this situation versus me posting a screenshot of an unencrypted forum? o.o Especially given #1...
3) I can share my password / decryption key / etc.
Well, point 3 can be eliminated by proper use of crypto; see OTR.
The response to point 2 is that if it is publicly known to everyone that messages’ contents are formally, mathematically, provably deniable (as can be guaranteed by a proper crypto implementation), people are disincentivized from even bothering to re-post content in the first place.
Point 1, however, I agree with completely, and that’s why I’m not actually advocating this solution.
You’re flat-out wrong about #3. Encryption is just a mathematical algorithm; it doesn’t care who uses it, only that you have the key.
In short, encryption is just a very complex function: you feed in a Key and a Message, and you get an Output, i.e. f(K, M) = O.
I already have access to Key and Message, so I can share both of those. The only thing you can possibly secure is f().
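To put that in code: here is a toy illustration (XOR against a random key, nothing like a real cipher) of the f(K, M) = O point. The function itself is public knowledge, so once I hand someone the key, they can run it just as well as I can.

```python
import os

def f(key: bytes, message: bytes) -> bytes:
    """Toy 'encryption': XOR each message byte with the corresponding key byte.

    Real ciphers are vastly more complex, but the structural point is the
    same: f is a public function of (Key, Message).
    """
    return bytes(m ^ k for m, k in zip(message, key))

message = b"meet at noon"
key = os.urandom(len(message))  # I hold the key...
output = f(key, message)

# ...so I can hand both `key` and `output` to anyone, and they recover the
# message with the very same public function. Nothing in f() stops them.
assert f(key, output) == message
```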
If you have a cryptographic program like OTR, I can just decompile it and get f(), and then post a modified version that lets the user manually configure their key. (I think this is actually trivial in OTR, but it’s been years since I poked at it.)
If it’s a website where I log in and it auto-decrypts things for me, then I can just send someone the URL and the key I use.
Point 2 seems to rely on Point 3, and as far as I’m aware the only formally, mathematically, provably deniable method WHEN THE KEY IS COMPROMISED is a one-time pad.
I’m not sure how much crypto experience you have, but “and no one else knows the key” is a foundation of every algorithm I have ever worked on, and I’m reasonably confident that it’s a mathematical requirement. I simply cannot imagine how you could possibly write a crypto algorithm that is secure EVEN with a compromised key.
EDIT: If you still think I’m wrong, can you please give me a sense of your crypto experience? For reference: I’ve met with the people who wrote OTR and hang out in a number of crypto circles, but only do fairly basic stuff in my actual work. I do still have a hobby interest in it, and follow it, but the last time I did any serious code breaking was about a decade ago.
You seem to be using a very narrow definition of “crypto”. I’m not sure whether you’re just being pedantic about definitions, in which case you may be correct, or whether you’re actually disputing the substance of what I’m saying. To answer your question, I’m not a cryptographer, but I have a CS degree and am quite capable of reading and understanding crypto papers (though not of retaining the knowledge for long). It’s been several years since I read the relevant papers, so I might be getting some of the details wrong in how I’m explaining it, but the basic concept of deniable message authentication is well understood by mainstream cryptographers.
You seem to be aware of the existence of OTR, so I’m confused—are you claiming that it doesn’t accomplish what it says it does? Or just that something about the way I’m proposing to apply similar technology to this use case would break some of its assumptions? The latter is entirely possible, as so far I’ve put a grand total of about five minutes’ thought into it; if that’s the case, I’d be curious to know which of its assumptions my proposed use case would break.
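To make “deniable message authentication” concrete, here is a minimal sketch using a shared-key MAC, which is roughly the mechanism OTR builds on (this illustrates the concept, not OTR’s actual protocol): because sender and receiver share the MAC key, a valid tag only proves that some key holder produced the message, so the receiver, or anyone the key is later revealed to, could have forged it, and a leaked transcript proves nothing to a third party.

```python
import hashlib
import hmac
import os

mac_key = os.urandom(32)  # shared by Alice and Bob for this conversation

def tag(message: bytes) -> bytes:
    # Symmetric authentication: anyone holding mac_key can produce this tag.
    return hmac.new(mac_key, message, hashlib.sha256).digest()

# Alice sends an authenticated message; Bob verifies it came from a key holder.
msg = b"keep this thread off Google"
msg_tag = tag(msg)
assert hmac.compare_digest(msg_tag, tag(msg))

# But Bob (or anyone who later learns mac_key) can mint an equally valid tag
# for a message Alice never wrote, so a published transcript proves nothing.
forged = b"something Alice never said"
forged_tag = tag(forged)
assert hmac.compare_digest(forged_tag, tag(forged))
```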
If I give you my key, you can pretend to be me on OTR. I’ve had friends demonstrate this to me, but I’ve never done it myself, so 99% confidence.
Technical disagreement, as near as I can tell, since you’re not advocating for the solution.
This must be why the media companies haven’t given up on DRM yet. They think if they can just unmask and arrest the ringleaders of the “organized conspiracy out there” then copy protection will start working, when in reality any random person can become a “conspiracy” member with nothing more than a little technical knowledge, a little free time, and a moral code that encourages copying.
To be fair, the “vetting” and “full deniability” options don’t really apply to the ??AA. The best pre-existing example for those kinds of policies might be the Freemasons or the Mormons? In neither case would I be confident that the bad PR they’ve avoided by hiding embarrassing things hasn’t been worse than the bad PR they’ve abetted by obviously dissembling and/or by increasing the suspicion that they’re hiding even worse things.
Exactly. That’s why I’m not actually advocating any of these technical solutions, just pointing out that they do exist in solution-space.
The solution that I’m actually advocating is even simpler still: do nothing. Rely on self-policing and the “don’t be an asshole” principle, and in the event that that fails (which it hasn’t yet), then counter bad speech with more speech: clearly state “LW/SIAI does not endorse this suggestion, and renounces the use of violence.” If people out there still insist on slandering SIAI by association to something some random guy on LW said, then fuck em—haters gonna hate.
Not a bad option indeed. It has merit if we are really that bothered about the general view of LW.
And for the record, the post is still accessible, albeit deleted.
LW has effectively zero resources to implement software changes.
If this were your real rejection, you would be asking for volunteer software-engineer-hours.
Tried.
Are you kidding? Sign me up as a volunteer polyglot programmer, then!
Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn’t get any, but rather that you tried to work with volunteers and something else didn’t work out.
Maybe it’s just that volunteers who will actually do any work are hard to find. Related.
Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.
I understand how that might have happened. Now that I am no longer a heroic volunteer saving my beloved website maiden, but just a potential contributor to an open source project, my motivation has dropped.
It is a strange inversion of effect. The issue list and instructions both make it easier for me to contribute, but since they reveal that the project is well organized, they also demotivate me because a well-organized project makes me feel like it doesn’t need my help. This probably reveals more about my own psychology than about effective volunteer recruitment strategies, though.
The site is open source; you should be able to just write a patch and submit it.
This would be a poor investment of time without first getting a commitment from Eliezer that he will accept said patch.
It’d get you familiar with the code base, which you’d need to be anyway if you wanted to be a volunteer contributor.
After finding the source and the issue list, I found instructions which indicate that there are, after all, non-zero engineering resources for LessWrong development. Specifically, somebody is sorting the incoming issues into “issues for which contributions are welcome” versus “issues which we want to fix ourselves”.
The path to becoming a volunteer contributor is now very clear.
Getting someone to sort a list, even on an ongoing basis, is not functionally useful if there’s nobody to take action on the sorted list.
I like the idea, but I have to agree that the PR cost of such a thing being leaked is probably vastly worse than simply being open about it in the first place.