Is there any way to use cryptography to enable users to verify their humanity to a website, without revealing their identity? Maybe by “outsourcing” the verification to another entity – e.g. governments distribute private keys to each citizen, which the citizen uses to generate anonymous but verifiable codes to give to websites.
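The "government signs, website verifies, identity stays hidden" idea can be sketched with a textbook RSA blind signature: the issuer signs a blinded token without seeing it, and the verifier checks the signature without learning who produced it. This is a toy illustration with tiny parameters, not a deployable scheme; real systems would use full-size keys or modern anonymous-credential protocols.

```python
# Toy RSA blind-signature sketch (illustrative textbook parameters only).
# The "government" signs a token without seeing it; the website later
# verifies the signature without learning which citizen produced it.
import hashlib

# Government's RSA key (deliberately tiny, for demonstration)
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Citizen: blind a fresh token before sending it to the signer
token = b"anonymous-session-token"
m = h(token)
r = 7                               # blinding factor, coprime to n
blinded = (m * pow(r, e, n)) % n    # the government sees only this value

# Government: sign the blinded value ("this request came from a citizen")
blind_sig = pow(blinded, d, n)

# Citizen: unblind to recover a valid signature on the original token
sig = (blind_sig * pow(r, -1, n)) % n

# Website: verify the signature with the public key alone
assert pow(sig, e, n) == m
print("anonymous token verified")
```

The unlinkability comes from the blinding factor `r`: the government sees `m·r^e`, which is statistically unrelated to `m`, yet unblinding yields an ordinary signature `m^d`.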
You can, for example, submit a hash of your gmail address and an authentication code from a Google two-factor-authentication app to any website that wants to confirm you’re human. The website then uploads the hash to a Google API. Google verifies that you are both using a human gmail account (with a registered credit card, phone number, etc.) and are posting at a human-possible rate. The website then allows you to post.
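The client side of this proposal could look something like the sketch below, combining a standard RFC 6238 TOTP code with the address before hashing so the website never sees the raw address. The verification endpoint is hypothetical; Google exposes no such API, which is part of what the reply below objects to.

```python
# Client-side sketch of the proposed scheme (hypothetical verifier API).
# The address is hashed together with a current 2FA code, so only a party
# that knows the address and the TOTP secret can re-derive the token.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30) -> str:
    """Standard RFC 6238 TOTP: HMAC-SHA1 over the current 30s time step."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**6
    return f"{code:06d}"

def humanity_token(address: str, secret_b32: str) -> str:
    """Hash the address with a fresh 2FA code into an opaque token."""
    return hashlib.sha256((address + totp(secret_b32)).encode()).hexdigest()

# The website would forward this opaque token to the (hypothetical) verifier,
# which re-derives it from the registered address and TOTP secret.
print(humanity_token("alice@gmail.com", "JBSWY3DPEHPK3PXP"))
```

Note that the verifier must already hold both the address and the shared TOTP secret to check the token, which is exactly why this proves account access rather than humanity.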
So it can already be done with existing services. There’s probably a specific Google API that does exactly that.
Wait, what? That doesn’t verify humanity, it only verifies that the sending agent has access to that gmail account. AIs (and the humans profiting off their use) have all the gmail accounts they need.
There’s probably no cryptographic operation that a human can do which an LLM or automated system can’t. Any crypto signature or verification you’re doing to prevent humans impersonating other humans will continue to work, but how many people do you interact with that you know well enough to exchange keys offline to be sure it’s really them?