With many jurisdictions introducing age verification laws for various things on the internet, a lot of questions have come up about implementation and privacy. I haven’t seen anyone come up with a real working example of how to implement it technically/cryptographically that doesn’t have any major flaws.
Setting aside the ethics of age verification and whether or not it’s a good idea - is it technically possible to accurately verify someone’s age while respecting their privacy and if so how?
For an implementation to work, it should:
- Let the service know that the user is an adult by providing a verifiable proof of adulthood (eg. A proof that’s signed by a trusted authority/government)
- Not let the service know any other information about the user besides what they already learn through http or TCP/IP
- Not let a government or age verification authority know whenever a user is accessing 18+ content
- Make it difficult or impossible for a child to fake a proof of adulthood, eg. By downloading an already verified anonymous signing key shared by an adult, etc.
- Be simple enough to implement that non-technical people can do it without difficulty and without purchasing bespoke hardware
- Ideally not require any long-term storage of personal information by a government or verification authority that could be compromised in a data breach
I think the first two points are fairly simple (lots of possible implementations with zero-knowledge proofs and anonymous signing keys, credentials with partial disclosure, authenticating with a trusted age verification system, etc. etc.)
The rest of the points are the difficult ones. Some children will circumvent any system (eg. By getting an adult to log in for them) but a working system should deter most children and require more than a quick download or a web search for instructions on how to circumvent.
The last point might already be a lost cause depending on your government, so unfortunately it’s probably not as important.
I’m pretty sure there is already a cryptographic protocol that can do this, but that’s not the point. We do NOT need age verification in software; it makes no sense. We need parents to take care of their own children, because why should open-source software do the job of failed parenting? It’s a social issue, not something that can be solved with technology. Otherwise we would have put shock collars on every kid who doesn’t behave.
Great idea, let’s get parents to raise their kids.
Now, how do we suddenly make them actually do that? Last I checked this idea has been around about as long as people have been around but it’s still not happening.
Parenting matters, but it’s not the only layer of protection. We don’t rely solely on parents to keep kids from walking into bars or buying cigarettes, we have laws and systems to back them up. Why should the internet be different?
You see, if we tell parents that it’s actually super important that they raise their kids, I’m sure they will do it. Just like if we tell everyone that a vaccine for a dangerous disease is a really good idea, everyone will just settle down and go get it.
How am I supposed to take care of my kids? My kid got up at 3am and used his school device to do things I don’t want. The site wasn’t supposed to be allowed by the school, but the bypass (a web site that wasn’t blocked) wasn’t one the school had found out about and blocked. Bypasses like that spread fast in schools.
My point is that we can’t rely on parental oversight alone, because some parents plainly won’t do it… and in your case, even actively trying may fail (it’s not your fault). And there are always going to be loopholes in every system. Clever kids will get past most verifications, and if they don’t, that likely means the verification has become too invasive to be worth it. The best, though not perfect, system is parental oversight + impartial verification + platform responsibility. This will reduce but not eradicate the problem.
+1 only because I can’t upvote more.
I’ll add one on your behalf
Despite our current parliament sucking ass, I still have some general trust in my country’s government (and culture). So with that in mind:
Our government bodies already have my basic data. Healthcare, census etc. and we use our online banking services to verify identity when accessing the data. It’s simple, and extremely widely used. I really don’t see why it would be so hard to make a relatively simple service that just gives sites that need to know a yes or no answer on if I’m over 18. They don’t need to know my birth date or any other information.
Not let a government or age verification authority know whenever a user is accessing 18+ content
This should be possible but of course the question is if one trusts the government to actually uphold this. Again, with my background, it’s a bit easier for me to speak.
Make it difficult or impossible for a child to fake a proof of adulthood, eg. By downloading an already verified anonymous signing key shared by an adult, etc.
You’ll never patch all the holes. In a perfect world, we wouldn’t be having this conversation. In a perfect world, parents would actually parent their kids and monitor their internet use. Access to adult content doesn’t even come close to being the biggest problem in many cases where kids’ parents are fucking up their duties. Drugs, gangs, and petty (and not so petty) crime come to mind. Collective responsibility would be great, but since we don’t live in a perfect world where everyone can just agree to a good idea like “take responsibility for your kids”, I’ll settle for trusting a democratic government to have some capacity to pick up those that fall.
I happen to agree with age verification laws. This is a tangent but I would also go a step further in saying that MAINSTREAM internet should not be possible to use without verifying that the user is a real individual person. This would be another yes/no question via a service. Outwardly they don’t have to reveal their identity but even JizzMcCumsocks needs to have a backend verification as a real person. Basically, if any government member uses some service with their own name and has a verification about that, that service must also have a way of verifying that any user is a real person. We have given Xitter way too much power and at the same time, allowed anonymity. Meta services too of course but I think Xitter is one of the worst due to easy and straight forward use. Humanity has shown that we are not equipped to handle the kind of (mis)information flow there is in these spaces. Spaces such as Lemmy can and should operate in full anonymity, as there are natural barriers to entry here, plus it’s less appealing when it’s not even really intended for the kind of use mainstream social media sites are. Here we have a collective and individual responsibility to account for the anonymity and the challenges it brings.
You know how there are stores that sell restricted substances and verify your age by checking a provided ID? Have those same stores sell a cheap, sealed card with a confirmation code on it. You can enter that code online to verify any service. The code expires after a set period of time after it’s first use to prevent sharing and misuse.
This system would be as secure as the restrictions on the restricted substance, such as alcohol, so it should be fine for “protecting the children”
Interesting idea. Could also give it out free with packs of beer like a golden ticket from Charlie And The Chocolate Factory.
And all across the whole world, 18 year old men will jump for joy when picking up birthday booze - “I can finally look at boobs on the internet!”
+1 At least in WA there are restrictions on licenced premises recording your ID, they are meant to just check it.
Parental Controls. Most devices have this setting. Parents need to be taught how to turn it on, and penalized when they don’t turn it on. This way there would be no centralized database that could be hacked thereby violating user privacy. Adults wouldn’t have to give up their government issued ID to websites.
Devices have them, but they’re not very good. I’m a parent and there are things I want to block that I can’t, and others I want to allow but on different rules than their system supports.
When software poses a requirement, software should be ditched in favor of protocols. This is why any software that relies on a closed spec protocol should be avoided.
You’ll never see an age verification requirement on IRC or XMPP. And any software using these protocols that try to implement age verification will simply be left at the curbside, replaced by an alternative.
is it technically possible to accurately verify someone’s age while respecting their privacy and if so how?
With your constraints yes, but there are open questions as to whether that would actually be enough.
Suppose there was a well-known government public key P_g, and a well protected corresponding government private key p_g, and every person i (i being their national identity number) had their own keypair p_i / P_i. The government would issue a certificate C_i including the date of birth and national identity number attesting that the owner of P_i has date of birth d.
Now when the person who knows p_i wants to access an age restricted site s, they generate a second site (or session) specific keypair P_s_i / p_s_i. They use ZK-STARKs to create a zero-knowledge proof that they have a C_i (secret parameter) that has a valid signature by P_g (public parameter), with a date of birth before some cutoff (DOB secret parameter, cutoff public parameter), and which includes key P_i (secret parameter), that they know p_i (secret parameter) corresponding to P_i, and that they know a hash h (secret parameter) such that
h = H(s | P_s_i | p_i | t), where t is an issue time (public parameter), and s and P_s_i are also public parameters. They send the proof transcript to the site, and authenticate to the site using their site / session specific P_s_i key. Now, as to how this fits your constraints:
Let the service know that the user is an adult by providing a verifiable proof of adulthood (eg. A proof that’s signed by a trusted authority/government)
Yep - the service verifies the ZK-STARK proof to ensure the required properties hold.
Not let the service know any other information about the user besides what they already learn through http or TCP/IP
Due to the use of a ZKP, the service can only see the public parameters (plus network metadata). They’ll see P_s_i (session specific), the DOB cutoff (so they’ll know the user is born before the cutoff, but otherwise have no information about date of birth), and the site for which the session exists (which they’d know anyway).
Generating a ZK-STARK proof of a complexity similar to this (depending on the choice of hash, signing algorithm etc…) could potentially take about a minute on a fast desktop computer, and longer on slower mobile devices - so users might want to re-use the same proof across sessions, in which case this could let the service track users across sessions (although naive users probably allow this anyway through cookies, and privacy conscious users could pay the compute cost to generate a new session key every time).
Sites would likely want to limit how long proofs are valid for.
Not let a government or age verification authority know whenever a user is accessing 18+ content
In the above scheme, even if the government and the site collude, the zero-knowledge proof doesn’t reveal the linkage between the session key and the ID of the user.
Make it difficult or impossible for a child to fake a proof of adulthood, eg. By downloading an already verified anonymous signing key shared by an adult, etc.
An adult could share / leak their P_s_i and p_s_i keypair anonymously, along with the proof. If sites had a limited validity period, this would limit the impact of a one-off leak.
If the adult leaks the p_i and C_i, they would identify themselves.
However, if there were adults willing to circumvent the system in a more online way, they could set up an online system which allows anyone to generate a proof of age and generates keypairs on demand for a requested site. It would be impossible to defend against such online attacks in general, and by the anonymity properties (your second and third constraints), there would never be accountability for it (apart from tracking down the server generating the keypairs if it’s a public offering, which would be quite difficult but not strictly impossible if it’s say a Tor hidden service). What would be possible would be to limit the number of sessions per user per day (by including a hash of s, p_i and the day as a public parameter), and perhaps for sites to limit the amount of content per session.
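The per-day limiting tag could look like this (a sketch; the delimiter and encoding are arbitrary choices on my part). In the real protocol this value would be exposed as a public parameter of the proof, with the ZKP attesting it was computed from the same p_i as the certificate:

```python
# Sketch of the rate-limit tag: a hash of the site, the user's secret
# key, and the date. The site never learns p_i, but sees the same tag
# for the same user on the same day, so it can cap sessions per tag.
import datetime
import hashlib

def day_tag(s: bytes, p_i: bytes, day: datetime.date) -> str:
    return hashlib.sha256(
        b"|".join([s, p_i, day.isoformat().encode()])
    ).hexdigest()

tag1 = day_tag(b"site", b"user-secret", datetime.date(2025, 1, 1))
tag2 = day_tag(b"site", b"user-secret", datetime.date(2025, 1, 1))
tag3 = day_tag(b"site", b"user-secret", datetime.date(2025, 1, 2))
```

Same user, same site, same day gives the same tag; the next day it changes, so sessions are unlinkable across days.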
Be simple enough to implement that non-technical people can do it without difficulty and without purchasing bespoke hardware
ZK-STARK proof generation can run on a CPU or GPU, and could be packaged up as, say, a browser addon. The biggest frustration would be the proof generation time. It could be offloaded to the cloud for users who trust the cloud provider but not the government or service provider.
Ideally not requiring any long term storage of personal information by a government or verification authority that could be compromised in a data breach
Governments already store people’s date of birth (think birth certificates, passports, etc…), and would need to continue to do so to generate such certificates. They shouldn’t need to store extra information.
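To make the statement being proven concrete, here’s the relation written as plain Python. This is only the check the ZK-STARK would attest to, not an actual ZKP; the HMAC “signature” and hash-based “keypair” are stand-ins I’ve assumed for illustration, not the real primitives a deployment would use:

```python
# Plain-Python sketch of the relation the proof attests to. In the
# real protocol the verifier never sees the secret inputs; here we
# simply evaluate the relation directly.
import hashlib
import hmac

def H(*parts: bytes) -> str:
    return hashlib.sha256(b"|".join(parts)).hexdigest()

def toy_pub(p_i: bytes) -> bytes:
    # stand-in for deriving the public key P_i from the private key p_i
    return H(b"pk", p_i).encode()

def gov_sign(gov_secret: bytes, cert_body: bytes) -> str:
    # stand-in for the government signing C_i with p_g (an HMAC,
    # not a real signature scheme)
    return hmac.new(gov_secret, cert_body, hashlib.sha256).hexdigest()

def relation_holds(verify_sig, cutoff, s, P_s_i, t, h,   # public parameters
                   cert_body, cert_sig, dob, p_i):       # secret parameters
    """The statement the ZKP proves without revealing the secrets."""
    return (verify_sig(cert_body, cert_sig)              # C_i signed by P_g
            and dob < cutoff                             # born before cutoff
            and f"dob={dob}".encode() in cert_body       # C_i attests this DOB
            and toy_pub(p_i) in cert_body                # C_i binds P_i; prover knows p_i
            and h == H(s, P_s_i, p_i, t))                # session hash commitment

# example run with made-up values
gov_secret = b"p_g-toy-secret"
p_i = b"user-private-key"
cert_body = b"dob=1990-05-04;id=123;key=" + toy_pub(p_i)   # C_i
cert_sig = gov_sign(gov_secret, cert_body)
verify = lambda body, sig: hmac.compare_digest(gov_sign(gov_secret, body), sig)

s, P_s_i, t = b"site", b"session-pubkey", b"2025-01-01T00:00"
h = H(s, P_s_i, p_i, t)
assert relation_holds(verify, "2007-01-01", s, P_s_i, t, h,
                      cert_body, cert_sig, "1990-05-04", p_i)
```

The site only ever sees the public parameters on the first line of `relation_holds`; everything on the second line stays with the user.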
- Apply for access to age-gated content.
- Ignore application for 18 years.
- Your account has been approved.
For an implementation to work, it should:

- Let the service know that the user is an adult by providing a verifiable proof of adulthood (eg. A proof that’s signed by a trusted authority/government)
- Not let the service know any other information about the user besides what they already learn through http or TCP/IP
Seems like that’s exactly what https://yivi.app/en/ can do.
This thread is going to be great for learning to spot https://en.wikipedia.org/wiki/Nirvana_fallacy
Here’s one good answer: https://crypto.stackexchange.com/a/96283
It has the downside of requiring a physical device like a passport or some specific trusted long-running locally-kept identity store held by the user. But it’s otherwise very good.
Another option does not require anything extra be kept by the user, but does slightly compromise privacy. The Government will not be able to track each time the user tries to access age-gated content, or even know what sources of age-gated content are being accessed, but they will know how many different sites the user has requested access to. It works like this:
- The user creates or logs in to an account on the age-gated site.
- The site creates a token `T` that can uniquely identify that user.
- That token is then blinded: `B(T)`. Nobody who receives `B(T)` can learn anything about the user.
- The user takes the token to the government age verification service (AVS).
- The user presents the AVS with `B(T)` and whatever evidence is needed to verify age.
- The AVS checks if the person should be verified. If not, we can end the flow here. If so, move on.
- The AVS signs the blinded token using a trusted AVS certificate, `S(B(T))`, and returns it to the user.
- The user returns the token to the site.
- The site unblinds the token and obtains `S(T)`. This allows them to see that it is the same token `T` representing the user, and to know that it was signed by the AVS, indicating that the user is of age.
- The site marks in their database that the user has been age verified. On future visits to that site, the user can just log in as normal, no need to re-verify.
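The blind/unblind math can be sketched with textbook RSA blind signatures. The tiny key, lack of padding, and the way `T` is derived are all illustrative assumptions; a real system would use a vetted scheme (e.g. RSA blind signatures as standardized in RFC 9474):

```python
# Toy RSA blind-signature sketch of the flow above. Insecure key size
# and no padding -- for illustration only.
import hashlib
import math
import secrets

# toy AVS keypair (real keys would be 2048+ bits)
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))      # AVS private exponent

# the site derives a token T identifying the user's account
T = int.from_bytes(hashlib.sha256(b"account-42").digest(), "big") % n

# the user blinds it with a random r coprime to n: B(T) = T * r^e mod n
r = secrets.randbelow(n - 2) + 2
while math.gcd(r, n) != 1:
    r = secrets.randbelow(n - 2) + 2
B_T = (T * pow(r, e, n)) % n

# the AVS signs the blinded token; it learns nothing about T itself
S_B_T = pow(B_T, d, n)                 # equals T^d * r mod n

# the user unblinds the result: S(T) = S(B(T)) * r^-1 mod n = T^d mod n
S_T = (S_B_T * pow(r, -1, n)) % n

# the site checks the AVS signature against its original token
assert pow(S_T, e, n) == T
```

The AVS only ever sees `B(T)`, which is uniformly random from its point of view, yet the signature it produces unblinds into a valid signature on `T`.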
All of the moving around of the token can be automated by the browser/app, if it’s designed to be able to do that. Unfortunately a typical OAuth-style redirect system probably would not work (someone with more knowledge please correct me), because it would expose to the AVS what site the token is being generated for. So the behaviour would need to be created bespoke. Or a user could have a file downloaded and be asked to share it manually.
There’s also a potential exposure of information due to timing. If site X has a user begin the age verification flow at 8:01, the AVS receives a request at 8:02, and the site receives a return response with a signed token at 8:05, then the government can, with a subpoena (or the consent of site X), work out that the user who started at 8:01 and returned at 8:05 is probably the same person who verified themselves at 8:02. Or at least narrow it down considerably. Making the redirect process manual would give the user the option to delay that, if they wanted even more privacy.
The site would probably want to store the unblinded, signed token, as long-term proof that they have indeed verified the user’s age with the AVS. A subsequent subpoena would not give the Government any information they could not have obtained from a subpoena in an un-age-verified system, assuming the token does not include a timestamp.