In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images that were meant to protect their privacy.
Put all these creepy bastards on a publicly viewable list.
Didn’t they already do that in their public posts or whatever? They don’t care.
People are so fucking sick.
Won’t work and if it does work, the resulting image has little to nothing to do with the original.
Source: I opened a badly taken .raw file a few thousand times and I know what focal length means, come at me.
What does focal length mean?
It’s the distance from the lens to the focal point, as in where the picture focuses on the sensor behind the lens. If you have a very long focal length like a telescope, you can see things further away but your field of view is very narrow. With a short focal length you can’t see as far, but you get a much wider view. Check out this chart:

If you get very close to something with a short focal length or far away from it with a long focal length you can get essentially the same picture of a main subject (although what you can see in the background will be different), but even then a short lens will sort of taper your subject closer to a single point and a long lens will widen it. You can see this effect easily on faces: see this gif or this gif or this picture for an example.
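If you want rough numbers to go with that, here's a quick sketch in Python. The 36mm sensor width is just my assumption of a full-frame camera, not something from the chart:

```python
import math

SENSOR_WIDTH_MM = 36.0  # assumed full-frame width; use ~23.5 for APS-C, etc.

def horizontal_fov_deg(focal_length_mm: float) -> float:
    """Horizontal angle of view for a simple rectilinear lens."""
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

for f in (18, 35, 50, 85, 200, 600):
    print(f"{f:>4} mm lens -> {horizontal_fov_deg(f):5.1f} degrees of horizontal view")
```

Long focal length, tiny angle; short focal length, wide angle. That's the whole trade-off.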
Wow, what an amazing reply, thank you very much. Those images help a lot.
Do you have a good way to remember which way fast and slow f-stops go? I always have to trial-and-error camera settings to go the right direction, especially when listening to someone talk about aperture.
Wider open, you let in more light and can use a faster shutter speed; more closed, you get less light and need a longer shutter speed.
And f stops work backwards. Think of it as percent of sensor covered. The bigger the number the more covered it is and the smaller the hole/aperture.
So wide open = low coverage = small f-stop -> lots of light -> “fast” shutter speed. And then the other way around. I think you finally worded it in a way it can stick in my brain! I like thinking about the f value as how much you’re covering the lens.
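If it helps, you can see the trade-off in actual numbers with a tiny sketch. Nothing in it is specific to any camera; it's just the standard rule that light scales with aperture area, i.e. with 1/(f-number squared):

```python
def shutter_for_same_exposure(base_f: float, base_shutter_s: float, new_f: float) -> float:
    """With ISO fixed, light scales with aperture area, which goes as 1/f_number^2."""
    return base_shutter_s * (new_f / base_f) ** 2

# Wide open at f/2.8 with a 1/1000 s shutter...
# ...stopping down two full stops to f/5.6 needs about 4x the shutter time.
print(shutter_for_same_exposure(2.8, 1 / 1000, 5.6))  # -> 0.004, i.e. ~1/250 s
```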
I like trying to simplify stuff to basic language and I am happy it was helpful
To add more specifics here for you, note that the f-stop is usually shown as a fraction, like f/2.8, f/4.0, etc.
So first of all, since the number is on the bottom of the fraction, that’s where you get smaller numbers = more light.
It’s also shown as a fraction because it’s a ratio, between your lens’s focal length (not focal distance to the subject) and the diameter of the aperture.
So if I’m taking a telephoto shot with my 70-200 @ 200 with the aperture wide open at f/2.8, that means the aperture should appear as 200/2.8 = 71.4mm. And that seems right to me! If you’re the subject looking into the lens the opening looks huge.
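And since it really is just a ratio, you can sanity-check it in a couple of lines. The numbers below are only the 70-200 example from this comment:

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """f-number = focal length / aperture diameter, so diameter = focal length / f-number."""
    return focal_length_mm / f_number

print(aperture_diameter_mm(200, 2.8))  # ~71.4 mm: the telephoto example above
print(aperture_diameter_mm(50, 2.8))   # ~17.9 mm: same f/2.8 on a 50mm is a much smaller hole
```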
deleted by creator
I am so glad I no longer interact with that dumpster fire of a social network. It’s like the Elon takeover and the monetization program brought every weirdo in the world out of the woodwork.
deleted by creator
Some liberal on BlueSky tried to use genAI to unmask ICE agents.
Sounds about right for x users
late sex offender Jeffrey Epstein
I’m so done with all the whitewashing. “Sex offender” sounds like I behaved wrong in consensual sex. What this prick was is a pedophile. A child rapist. A kid-abuser and -rapist. But surely no “late financier” or whatever else the media chose over the facts.
Also a slaver and child abductor.
And, it seems, murderer
Oh right, my bad 😐
Are these people fucking stupid? AI can’t remove something hardcoded into the image. The only way for it to “remove” it is by placing a different image over it, but since it has no idea what’s underneath, it would literally just be making up a new image that has nothing to do with the content of the original. Jfc, people are morons. I’m disappointed the article doesn’t explicitly state that either.
They think that the AI is smart enough to deduce from the pixels around it what the original face must have looked like, even though there’s actually no reason why there should be a strict causal relationship between those things.
The black boxes would be impossible, but there are some types of blur that keep enough of the original data that they can be undone. There was a pedophile who used a swirl to cover his face in pictures, and investigators were able to unswirl the images and identify him.
With how the rest of it has gone it wouldn’t surprise me if someone was incompetent enough to use a reversible one, although I have doubts Grok would do it properly.
Edit: this technique only works for video, but maybe if there are several pictures of the same person all blurred it could be used there too?
Yeah, but this type of machine learning and the diffusion models used in image genAI are almost completely disjoint.
Agree with you there. Just pointing out that in theory and with the right technique, some blurring methods can be undone. Grok most certainly is the wrong tool for the job.
Several years ago, authorities were searching the world for a guy who had been going around molesting children, photographing them, and distributing the photos on the Internet. He was often in the photos, but he had chosen to use some sort of swirl blur on his face to hide it. The authorities just “unswirled” it, and there was his face, in all those photos of abused children.
They caught him soon after.
They couldn’t do that from one photo though; they’d need several examples all believed to be the same guy. A swirl like that preserves some of the information and you can reverse it, but the lost data is lost. Do that for several photos and you can get enough preserved bits to piece something together.
Same idea for some other kinds of blurs or mosaics. Black boxes, not so much - you’ve got no data to work with, so anything you tried to reconstruct would be more or less entirely fantasy.
A swirl is a distortion that is non-destructive. An anonymity blur averages out pixels over a wide area in a repetitive manner, which destroys information. Would it be possible to reverse? Maybe a little bit, maybe a small fraction of the pixels, but there wouldn’t be any way to prove the accuracy of those pixels and there would be massive gaps in information.
Swirl is destructive, like almost everything in raster graphics that involves recompression, but unswirling it back makes a good approximation at somewhat reduced quality. If the program or the code of the effect is known, e.g. they did it in Photoshop, you just drag a slider to the opposite side. Come to think of it, it could be a nice puzzle in an adventure game, or another kind of captcha.
You’re right. By “non-destructive” I meant more that it is reversible, depending on factors like intensity and whether the algorithm is known.
It’s true that some blurs could be undone, but the ones used in the files are definitely destructive and cannot be reversed. Grok and any other image generation tool is also definitely not capable of doing it. Undoing a blur requires knowledge of how it was blurred so you can use the same algorithm to reverse it; generative models simply guess what it should look like.
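To make the "same algorithm, opposite direction" point concrete, here's a minimal sketch with scikit-image. The test image and swirl parameters are made up for illustration; the only point is that a known parametric distortion can be roughly inverted, while a black box leaves nothing to invert:

```python
import numpy as np
from skimage import data
from skimage.transform import swirl

original = data.camera() / 255.0                       # built-in grayscale test image
swirled = swirl(original, strength=10, radius=200)     # "anonymize" with a known swirl
recovered = swirl(swirled, strength=-10, radius=200)   # apply the same swirl in reverse

# Small but non-zero: each resampling pass loses a little detail,
# which is why the result is only an approximation of the original.
print("mean abs error after unswirl:", np.abs(original - recovered).mean())
```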
There was someone who reported that, due to the incompetence of White House staffers, some of the Epstein files had simply been “redacted” in MS Word by highlighting the text black, so people were actually able to remove the redactions by converting the PDF back into a Word document and deleting the black highlighting to reveal the text.
Who knows if some of the photos might have the same issue.
That’s not how images like PNGs or JPGs work.
In the case of what wound up on Roman Numeral Ten (formerly Twitter) that’s correct, but given the actual PDF dump from the gov, if they just slapped an annotation on top of the image it would be possible to remove it and reveal what’s underneath.
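Purely as an illustration of that failure mode, here's a rough sketch with PyMuPDF. The filename is hypothetical, and it only reveals anything if the redaction really is an annotation or text layer sitting on top of the page rather than baked into a flattened image:

```python
import fitz  # PyMuPDF

doc = fitz.open("dump.pdf")  # hypothetical filename
for page in doc:
    # Text extraction simply ignores a cosmetic black rectangle drawn over the text.
    print(page.get_text())

    # Strip any overlay annotations (highlights, squares, stamps) from the page.
    while page.first_annot:
        page.delete_annot(page.first_annot)

doc.save("dump_without_overlays.pdf")
```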
I didn’t realise that they released the images as pdfs too.
It was simpler than that. You can just copy the black-highlighted text and paste it anywhere.
“Hackers used advanced hacking to unredact the Epstein files!” - Actual headline. The “hackers” just did Ctrl+A, Ctrl+C, open word processor, Ctrl+V.
Ctrl+A, Ctrl+C, open word processor, Ctrl+V
DID YOU JUST DOWNLOAD A VIRUS ON MY KEYBOARD?
No regrets! runs away with all of your data in a comically large sack
Hey! Cut it out! If those people could read, they’d be very upset!
deleted by creator
Actually, there is a short video on that page that explains this with examples
Video ≠ article
How do these AI models generate nude imagery of children without having been trained with data containing illegal images of nude children?
The datasets they are trained on do in fact include CSAM. These datasets are so huge that it easily slips through the cracks. It’s usually removed whenever it’s found, but I don’t know how this actually affects the AI models that have already been trained on that data; to my knowledge, it’s not possible to selectively “untrain” models, and they would need to be retrained from scratch. Plus it occasionally crops up in the news that new CSAM keeps being found in training data.
It’s one of the many, many problems with generative AI
Can’t ask them to sort that out. Are you anti-ai? That’s a crime! /s
Easy answer is, they don’t.
Though that’s just the one admitting to it.
A slightly more nuanced answer is, it probably depends. There’s likely to be some inference made between age ranges, but my guess is that it’d be sub-par, given that it sometimes struggles with reproducing images it has a tonne of actual data for.
Tbf it’s not needed. If it can draw children and it can draw nude adults, it can draw nude children.
Just like it doesn’t need to have trained on purple geese to draw one. It just needs to know how to draw purple things and how to draw geese.
I don’t think so. Speaking as a parent.
What don’t you think?
Why does being a parent give any authority in this conversation?
I have changed diapers and can attest to the anatomical differences between child and adult, and therefore know AI cannot extrapolate that difference without accurate data clarifying these differences. AI would hallucinate something absurd or impossible without real image data trained in its model.
We have all been children, we all know the anatomical differences.
It’s not like children are alien; most differences are just “this is smaller and a slightly different shape in children”. Many of those differences can be seen on fully clothed children. And for the rest, there are non-CSAM images that happen to have nude children. As I said earlier, it is not uncommon for children to be fully nude at beaches.
We are human beings. AI is not. It never had that experience of being or caring for a child. It does not (or should not) have that data in its dataset.
That’s not true: a child and an adult are not the same, and AI cannot do such things without the training data. It’s the full wine glass problem. And the only reason THAT example was fixed after it was used to show the methodology problem with AI is because they literally trained it for that specific thing to cover it up.
I’m not saying it wasn’t trained on CSAM or defending any AI.
But your point isn’t correct
What prompts you use and how you request changes can get the same results. Clever prompts already circumvent many hard-wired protections. It’s a game of whack-a-mole, and every new iteration of an AI will require different methods to bypass those protections.
If you can ask it the right ways it will do whatever a prompt tells it to do
&gt;!You can’t tell it to make a nude image of a child, I assume, but you can tell it to make the subject in the image from the last prompt 60% smaller and adjust it as necessary to make it believable.!&lt; That probably shouldn’t work, but I don’t put anything past these assholes.
It doesn’t take actual trained images/data if you can just tell it how to get the results you want by using different language that it hasn’t been told not to accept.
The AI doesn’t know what it is doing, it’s simply running points through its system and outputting the results.
It still seems pretty random. So they’ll say they fixed it so it won’t do something, but all they likely did was reduce the probability, so we still get screenshots showing what it sometimes lets through.
That’s not exactly true. I don’t know about today, but I remember about a year ago reading an article about an image generation model not being able, with many attempts, to generate a wine glass full to the brim, because all the wine glasses the model was trained on were half-filled.
Did it have any full glasses of water? According to my theory, it has to have data for both “full” and “wine”.
Your theory is more or less incorrect. It can’t interpolate as broadly as you think it can.
The wine thing could prove me wrong if someone could answer my question.
But I don’t think my theory is that wild. LLMs can interpolate, and that is a fact. You can ask it to make a bear with duck hands and it will do it. I’ve seen images on the internet of things similar to that generated by LLMs.
Who is to say interpolating nude children from regular children+nude adults is too wild?
Furthermore, you don’t need CSAM for photos of nude children.
Children are nude at beaches all the time, there probably are many photos on the internet where there are nude children in the background of beach photos. That would probably help the LLM.
You are confusing LLMs with diffusion models. LLMs generate text, not images. They can be used as inputs to diffusion models and are thus usually intertwined, but they are not responsible for generating the images themselves. I am not completely refuting your point in general: generative models are capable of generalising to an extent, so it is possible that such a system would be able to generate such images without having seen them. But how anatomically correct that would be is an entirely different question, and the way these companies sweep so broadly through the internet makes it very possible that these images were part of the training data.
Well yes, the LLMs are not the ones that actually generate the images. They basically act as a translator between the image generator and the human text input. Well, just the tokenizer probably. But that’s beside the point. Both LLMs and image generators are generative AI and have similar mechanisms. They both can create never-before-seen content by mixing things they have “seen”.
I’m not claiming that they didn’t use CSAM to train their models. I’m just saying that this is not definitive proof of it.
It’s like claiming that you’re a good mathematician because you can calculate 2+2. Good mathematicians can do that, but so can bad mathematicians.
unblur the face with 1000% accuracy
They have no idea how these models work :D

biblically accurate cw casting
CW? The TV show?
Barrett O’Brien
It’s the same energy as “don’t hallucinate and just say if you don’t know the answer”
and don’t forget “make no mistakes” :D
Though it is 2026. Who’s to say Elon didn’t feed the unredacted files into Grok while out of his face on ket 🙃
It feels like being back on the playground
“nuh uh, my laser is 1000% more powerful”
“oh yea, mine is googolplex percent more powerful”

Wait, what? My son has been using “googleplex” when he wants a really big number. I thought it was a weird word he made up. I guess it’s a thing…
It is, with a slightly different spelling. A googol is 10^100, and a googolplex is 10^(googol), or written conventionally, a one followed by a metric shit ton of zeros.
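If anyone wants to sanity-check the sizes, plain Python handles a googol just fine; a googolplex, not so much:

```python
googol = 10 ** 100
print(len(str(googol)) - 1)  # 100 zeros after the leading 1

# A googolplex is 10 ** googol: it has a googol zeros, which is far more digits
# than could ever be printed or stored, so it stays a thought experiment.
```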
I wondered if the word had something to do with a googol (I learned that word from World Book Encyclopedia kids books), but I figured my young son didn’t know that word yet and just invented some word using Google. Crazy how language can get around on the playground.
My son also uses it frequently. He learnt that word from a Captain Underpants book or one of the other works from Dav Pilkey, so maybe it’s from there.
We just started reading those books at bedtime so I’ll be able to report if I see that word in there.
Fun fact: Google was supposed to be named Googol, but the guy who was tasked with ordering the domain name misunderstood. As history would tell, they just decided to stick with Google.
Enhance!
Uncrop!
Or percentages
I doubt any of these people are accessing X over Tor. Their accounts and IPs are known.
In a sane world, they’d be prosecuted.
In MAGAMERICA, they are protected by the Spirit of Epstein

What crime do you imagine they would be committing?
I don’t know what they hope to gain by seeing the kid’s face, unless they think they can match it up with an Epstein family member or something (seems unlikely to be their goal).
Of course they are. Who’s left on Twitter nowadays? Elon acolytes?
When I realized that tweets from paid accounts always got stuck at the top (really??), I immediately stopped using it.
And Grok, being trained on Elon’s web history, doesn’t need to be asked to find, let alone unblur, said images.
So my company was involved in a lawsuit, and I was asked to help review files and redact information. They used a specific piece of software that all the files were loaded into, and the software performed the redactions and saved the redacted files. It really is mind-blowing that the government wouldn’t use a similar process.
These are the clowns that redacted the first files with MS Word’s black highlight, because DOGE cut their Adobe accounts.