Some powerful agents (say secret services, or the government of… let’s say China) would benefit greatly from disrupting anonymous electronic communication as a whole, because that’d force electronic communication to occur in a non-anonymous fashion. People could still encrypt, but it’d at least be known who talked to whom, and that’s the kind of information that’s apparently worth billions of dollars and a couple of civil rights. Correct?
But how could you do that? Thoroughly anonymized peer-to-peer networks built to defy surveillance (such as Freenet) appear to successfully make de-anonymizing communication very, very, very hard. If you kill or severely impede less-than-perfect anonymization services such as Tor, anonymity-minded people can just migrate to services such as Freenet, and your plan to disrupt anonymous electronic communication has backfired. Correct?
But what you can do is attack not the anonymity, but the communication inside that anonymity. All you need to do is flood the anonymous medium with disruptive pseudo-communication. Spam is the obvious example. You can’t make your bots too easy to identify (especially if there are web-of-trust-like structures between the anonymization layer and the actual communication), but as long as identifying them remains imperfect, you can simply throw in more and more bots.
How do you identify bots as such? You do Turing tests, of course. How do you identify lots and lots of bots as such? You do completely automated Turing tests, or Captchas. Not necessarily the ones we have, which are apparently somewhat solvable with the current state of machine learning, but better ones. Captchas have already improved, because they had to. Surely there can be better ones, or sites can start to require perfect performance on ten different Captchas in a row before accepting someone as a non-bot, or charge (even anonymously, using something like bitcoin) for the privilege of getting to take the Captcha. But once you get to the level where narrow AIs can solve Captchas as successfully as humans, the floodgates are open.
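The stacking idea can be made concrete with some back-of-the-envelope numbers (the per-captcha accuracies below are illustrative assumptions, not measurements of any real system):

```python
# Back-of-the-envelope: requiring N captchas in a row amplifies
# any per-captcha accuracy gap between humans and bots.
def pass_rate(per_captcha_accuracy: float, n: int) -> float:
    """Probability of solving n independent captchas in a row."""
    return per_captcha_accuracy ** n

human = pass_rate(0.95, 10)  # ~0.599: most humans still get through (with retries)
bot = pass_rate(0.50, 10)    # ~0.001: mediocre solvers are almost always filtered

# Once a narrow AI matches human per-captcha accuracy,
# the gap vanishes and stacking buys you nothing:
ai = pass_rate(0.95, 10)     # identical to the human rate
```

Stacking works only while there is a gap to amplify, which is exactly the “floodgates” point: human-level solvers make every such scheme pay the same cost humans do.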
And then anyone who benefits from disrupting all anonymous electronic communication can, and will, do so. Non-anonymity will be promoted as “a small price to pay” to get rid of the bot plague, and everyone will live happily ever after. Except, that is, for people in the vast majority of countries that do not have a First Amendment, who are scared of their governments for very good reasons. They’ll retreat into non-electronic communication, of course, but that can’t be the way forward, can it?
Your argument is basically that anonymous networks can be spammed into uselessness. That looks theoretically possible but practically difficult; still, that’s not the main problem with your argument. The biggest hole, from my point of view, is that you think that captchas are a good (or even the only) anti-spam measure. They are not.
And, of course, email is a pseudonymous P2P network which used to have a large spam problem and which, by now, has largely solved it.
Here is a good write-up of how spam wars work in real life.
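For a flavor of the content side of those real-life spam wars, here is a toy Bayesian filter in the spirit of classic content-based spam filtering (the tiny corpus and the smoothing choice are made up purely for illustration):

```python
from collections import Counter
import math

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts and totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals, prior_spam=0.5):
    """Log-odds that text is spam, with add-one smoothing; > 0 means 'more likely spam'."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = math.log(prior_spam / (1 - prior_spam))
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + len(vocab))
        p_ham = (counts[False][word] + 1) / (totals[False] + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return log_odds

corpus = [
    ("buy cheap pills now", True),
    ("cheap pills cheap pills", True),
    ("meeting notes attached", False),
    ("lunch at noon", False),
]
counts, totals = train(corpus)
```

Note that a filter like this judges message content only; it needs no IP address or sender identity, which matters for the anonymous-network case discussed below.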
Spam wars in real life use mechanisms that don’t work in fully anonymous networks like Freenet. You can’t filter by IP in a network without IPs.
Captchas are obviously not a good (or even the only) anti-spam measure. But inside anonymous networks, they’re one of the few things that work. Webs of Trust, which I explicitly mentioned, are another; they just don’t scale well.
How would breaking captchas break anonymous communications?