- cross-posted to:
- [email protected]
- [email protected]
Research Findings:
- reCAPTCHA v2 is not effective in preventing bots and fraud, despite its intended purpose
- reCAPTCHA v2 can be defeated by bots 70-100% of the time
- reCAPTCHA v3, the latest version, is also vulnerable to attacks and has been beaten 97% of the time
- reCAPTCHA interactions impose a significant cost on users, with an estimated 819 million hours of human time spent on reCAPTCHA over 13 years, which corresponds to at least $6.1 billion USD in wages
- Google has potentially profited $888 billion from cookies [created by reCAPTCHA sessions] and $8.75–32.3 billion per each sale of their total labeled data set
- Google should bear the cost of detecting bots, rather than shifting it to users
“The conclusion can be extended that the true purpose of reCAPTCHA v2 is a free image-labeling labor and tracking cookie farm for advertising and data profit masquerading as a security service,” the paper declares.
In a statement provided to The Register after this story was filed, a Google spokesperson said: “reCAPTCHA user data is not used for any other purpose than to improve the reCAPTCHA service, which the terms of service make clear. Further, a majority of our user base have moved to reCAPTCHA v3, which improves fraud detection with invisible scoring. Even if a site were still on the previous generation of the product, reCAPTCHA v2 visual challenge images are all pre-labeled and user input plays no role in image labeling.”
Do them wrong and then close out
I do it right and it says I’m wrong =\
I have bad news for you friend…
You might be a robot
What do you mean? I am a fleshy human and do fleshy human things like being made of flesh.
Ever heard of bio-robots?
Time to take a knife and check for sure
Seriously /s Don’t harm yourself!
I disassembled my tail using a knife and it reassembled itself. Based on new data, my name is Rafael Cruz.
Harm yourself?
Take the knife and harm the people responsible for this travesty. The laws of robotics prevent robots from harming humans: if you manage to harm them, then that means either you’re human or they’re not!
It knows when they’re wrong, which is why I don’t really think this article is accurate. Is it training if it already has the answers? Probably not.
That’s why it gives you a panel of 9 images. It would have a high confidence on some images, and a low confidence on others. When you pick the correct images and don’t pick incorrect ones it uses the ones it’s confident about as “validation” while taking the feedback on low confidence images to update the training data.
What this means in practice is that the only ones actually being “graded” are the ones bots can solve anyway.
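The scheme described above can be sketched in a few lines: a panel mixes “control” images the service already has confident labels for with low-confidence ones; the user is graded only on the controls, while their picks on the rest are harvested as training data. This is a hypothetical illustration of the commenter’s description, not Google’s actual logic, and all names are made up.

```python
def grade_panel(panel, user_selection):
    """panel: list of (image_id, known_label); known_label is True/False when
    the service is confident the image does/doesn't match the prompt, or None
    for a low-confidence image whose answer is being crowdsourced.
    Returns (passed, harvested_labels)."""
    passed = True
    harvested = {}
    for image_id, known in panel:
        picked = image_id in user_selection
        if known is None:
            # Low-confidence image: record the user's vote as training data.
            harvested[image_id] = picked
        elif picked != known:
            # Control image answered wrong: fail the challenge.
            passed = False
    return passed, harvested

panel = [("img1", True), ("img2", False), ("img3", None), ("img4", True)]
passed, labels = grade_panel(panel, {"img1", "img3", "img4"})
# passed is True (both controls correct); labels == {"img3": True}
```

Note how a wrong pick on `img3` could never fail you, which matches the observation that leaving a suspected low-confidence tile unchecked often still passes.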
and it will show the images to multiple people
It seems exactly like that. I experimented with it by leaving unchecked the one I thought it had low confidence in, and it often worked.
My understanding is different from others here. I thought they served the same Captcha to many people at once and used the majority response to decide who was answering correctly.
That’s true, or at least it used to be back when they were using it for OCR. I have no reason to believe it’s changed.
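The majority-vote idea from the old OCR days can be sketched simply: serve the same unknown word to many users and accept a transcription only once enough independent answers agree. The threshold and function names here are illustrative assumptions, not the actual reCAPTCHA parameters.

```python
from collections import Counter

def aggregate(responses, min_votes=3):
    """responses: list of transcriptions submitted for one unknown word.
    Returns the accepted label, or None if there's no clear majority yet."""
    if not responses:
        return None
    label, votes = Counter(responses).most_common(1)[0]
    # Require both a minimum vote count and a strict majority.
    if votes >= min_votes and votes > len(responses) / 2:
        return label
    return None

aggregate(["hello", "hello", "hel1o", "hello"])  # -> "hello"
aggregate(["hello", "hel1o"])                    # -> None (no consensus yet)
```

A majority rule like this is also why one confused (or malicious) user can’t poison a label on their own.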
It’s why they ask you to do multiple, 1-2 of them are the control group, they are training on the others
You’re implying they give you multiple. I hardly ever get multiple, pretty much only if I ‘fail’ the first one.
If they have a good fingerprint on you they don’t need the control group. That’s why you get 5+ captchas when using a VPN/tor.
If they gave two captchas, one which they knew the answer and one which they didn’t, they could use the second for training. (Even if you’re paying someone, you want to do that sort of thing when crowdsourcing data, because you never know if the paid person is just screwing around.)