If you have the developer time for it, have you considered building a cryptocurrency-based firewall? Pay $1 to whitelist your IPv6 range in the firewall.
What to do with non-whitelisted IPs is up to you; you could limit the bandwidth for them.
I suggest this because the endgame of the IP address doxxing performed by companies like Cloudflare is the death of anonymity on the internet. Each ISP has a finite IP range and a finite number of optical fiber cables, so there are only so many times someone can change their IP address.
(Sure the NSA probably knows who you are anyway, but IP ranges mapped to real names by random companies are eventually going to end up sold on the dark web to basically anyone with money.)
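The throttling half of the proposal above can be sketched independently of the payment plumbing. A minimal sketch, assuming a stored set of paid-for IPv6 ranges (all names, ranges, and rates below are hypothetical, and verifying the $1 on-chain payment is out of scope):

```python
# Hypothetical whitelist check: paid ranges get full bandwidth,
# everyone else is throttled rather than blocked outright.
import ipaddress

# Hypothetical store of IPv6 ranges whose owners have paid.
paid_ranges = [ipaddress.ip_network("2001:db8:1234::/48")]

FULL_RATE_KBPS = 10_000   # whitelisted clients
THROTTLED_KBPS = 64       # non-whitelisted clients

def bandwidth_for(client_ip: str) -> int:
    """Return the bandwidth cap for a client based on paid whitelisting."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in paid_ranges):
        return FULL_RATE_KBPS
    return THROTTLED_KBPS

print(bandwidth_for("2001:db8:1234::42"))  # inside a paid /48
print(bandwidth_for("2001:db8:ffff::1"))   # not whitelisted
```

In practice you would enforce the cap in the kernel (e.g., with traffic-shaping rules) rather than in application code; this only illustrates the lookup.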
Interesting thought. I tend to agree that the endgame of … protection from scalable attacks in general … is a lack of anonymity. Without identity, there can be no memory of behavior, and no prevention of abuse that is only harmful across multiple events/sources. I suspect that endgame is a long way out, though.
Your proposed solution (paid IP whitelisting) is pretty painful—the vast majority of real users (and authorized scrapers) don’t have a persistent enough address, or at least don’t know that they do, to participate.
Hi! Created a (named) account for this—in fact, I think you can conceptually get some of those reputational defenses (memory of behavior; defense against multi-event attacks) without going so far as to drop anonymity / prove one’s identity!
See my Twitter thread here, summarizing our paper on Personhood Credentials.
Paper’s abstract:
Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: “personhood credentials” (PHCs), digital credentials that empower users to demonstrate that they are real people—not AIs—to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions—governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI’s increasing indistinguishability from people online (i.e., lifelike content and avatars, agentic activity), and AI’s increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and “proof-of-personhood” systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. In contrast, existing countermeasures to automated deception—such as CAPTCHAs—are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use-cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
This seems like regular auth, just using a trusted third party to re-anonymize. Maybe I’m missing something, though. It seems likely it won’t provide much value if it’s unbreakably anonymous (because it only takes a few stolen credentials to give an attacker access to fake-humanity), and doesn’t provide sufficient anonymity for important uses if it’s escrowed (such that the issuer CAN track identity and individual usage, even if they currently choose not to).
Yeah, I appreciate the engagement, but I don’t think either of those is a knock-down objection:
The ability to illicitly turn a few stolen credentials into a few extra accounts is still meaningfully different from being able to create ~unbounded accounts. It is true this means a PHC doesn’t 100% ensure a distinct person, but it can still provide pretty high assurance and significantly increase the cost of attacks that depend on scale.
Re: the second point, I’m not sure I fully understand—say more? By our paper’s definitions, issuers wouldn’t be able to identify individuals merely by choosing to. In fact, even if an issuer and service provider colluded, PHCs are meant to be robust to that. (Devil is in the details, of course.)
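To make the unlinkability point concrete: a standard building block in this space is the blind signature, where the issuer signs a credential token it never sees, so the resulting credential cannot later be linked back to the issuance session even by the issuer itself. A toy RSA blind signature in Python (illustrative only: tiny key, no padding, and not necessarily the construction the paper actually uses):

```python
# Toy RSA blind signature: the issuer signs a blinded token,
# the user unblinds it, and the result verifies against the
# issuer's public key even though the issuer never saw the token.
import hashlib
import secrets
from math import gcd

# Hypothetical toy RSA key for the issuer (real keys are >= 2048 bits).
p, q = 1000003, 1000033
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

# User: hash a random credential token, then blind it with a random factor r.
token = secrets.token_bytes(16)
m = int.from_bytes(hashlib.sha256(token).digest(), "big") % n
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n      # the issuer sees only this value

# Issuer: signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# User: unblinds, obtaining the issuer's signature on m itself.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify with the public key (n, e).
assert pow(sig, e, n) == m
```

Because the issuer only ever handles `blinded` (which is uniformly randomized by `r`), it cannot match the final `sig` it sees in the wild to any particular issuance, which is the property the collusion argument above relies on.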