• 1 Post
  • 36 Comments
Joined 1 year ago
Cake day: June 17th, 2023



  • Unless something changed in the specification since I read it last, the attested environment payload only contains minimal information. The only things the browser is required to attest about the environment are that this browser is {{the browser ID}} and that it is not being driven by a bot (e.g. headless Chrome) or automated process.
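
    As a purely illustrative sketch (this is not the actual token format from the spec, and the field names are made up), the verdict described above carries roughly this much information and nothing more:

    ```python
    # Hypothetical shape of the minimal attestation verdict described above.
    # Field names are invented for illustration; the spec defines its own format.
    attestation_verdict = {
        "browser": "{{the browser ID}}",  # which browser is making the request
        "is_automated": False,            # not headless Chrome, not a bot/script
    }
    # An attester signs this verdict; the website learns these two facts and
    # nothing about extensions, ad blockers, or what happens to the page later.
    ```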

    Depending on how pedantic people want to be about the definition of DRM, that makes it both DRM and not DRM. It’s DRM in the sense that it’s “technology to control access to copyrighted material” by blocking bots. But, it’s not DRM in the sense that it “enables copyright holders and content creators to manage what users can do with their content.”

    It’s the latter definition that people colloquially know DRM as being. When they’re thinking about DRM and its user-hostility, they’re thinking about things like Denuvo, HDCP, always-online requirements, and so forth: technologies that restrict how a user interacts with content after they download/buy it.

    Calling web environment integrity “DRM” is, at best, pedantry over a definition that the average person doesn’t use, and at worst, an attempt to alarm/incite/anger readers by describing it with an emotionally-charged term. As it stands right now, once someone is granted access to content gated behind web environment integrity, they’re free to use it however they want. I can load a website that enforces WEI and run an adblocker to my heart’s content, and it can’t do anything to stop that once it serves me the page. It can’t tell the browser to disable extensions, and it can’t enforce integrity of the DOM.

    That’s not to say it’s harmless or can’t be turned into user-hostile DRM later, though. There are a number of privacy, usability, ethical, and walled-garden-ecosystem concerns with it right now. If it ever gets to widespread implementation and use, they could later amend it to require sending an extra field that says “user has an adblocker installed”. With that knowledge, a website could refuse to serve me the page—and that would be restricting how I use the content in the sense that my options then become their way (with disabled extensions and/or an unmodified DOM) or the highway.

    The whole concept of web environment integrity is still dubious and reeks of ulterior motives, but my belief is that calling it “DRM” undermines efforts to push back against it. If the whole point of its creation is to lead the way to future DRM efforts (as in the latter definition), having a crowd of people raising pitchforks over something they incorrectly claim it does just gives proponents of WEI an excuse to say “the users don’t know what they’re talking about” and ignore our feedback as mob mentality. Feedback pointing out current problems and properly articulating future concerns is a lot harder to sweep under the rug.



  • The image needs to have already been downloaded the moment the client even fetches it, or you can use the image to track whether a particular user is online/has read the message.

    Oh wow… That’s an excellent point. And even if the client downloads it the moment it fetches the message, that would still be enough to help determine when somebody is using Lemmy. I don’t think advertisers would have a reason to do that¹, but I wouldn’t put it past a malicious individual to use it to create a schedule of when somebody else is active.

    ¹ It’s probably easier for them to host their own instance and track the timestamp of when somebody likes/dislikes comments and posts since that data is shared through federation.
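
    To make the risk concrete, here’s a toy sketch (hypothetical endpoint, Flask assumed) of how the host of an embedded image could build that kind of activity log just by recording when the image gets fetched:

    ```python
    # Toy sketch of the tracking concern above (hypothetical; Flask assumed):
    # every fetch of a per-recipient image URL gives the host a
    # "this user was online at time T" data point.
    import datetime
    from flask import Flask, send_file

    app = Flask(__name__)
    sightings: dict[str, list[datetime.datetime]] = {}

    @app.get("/img/<token>.png")
    def tracked_image(token: str):
        # Each recipient is sent a unique token, so timestamps map to one person.
        sightings.setdefault(token, []).append(
            datetime.datetime.now(datetime.timezone.utc)
        )
        return send_file("image.png", mimetype="image/png")
    ```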

    This needs to be implemented in the backend. Images already get downloaded to and served from the server’s pict-rs store in some instances, so there’s code to handle this problem already.

    That would be ideal, I agree. This comment on the GitHub issue explains why some instances would want the ability to disable it, though. If it does eventually get implemented, having Sync as a fallback for instances where media proxying is disabled would be a major benefit for us Sync users.

    A small side note: that comment also points out the risk of a media proxy downloading illegal media. I don’t necessarily think lj would need to worry about it in the same way, though. From my understanding, the risk there is that an instance would download the media immediately after receiving a local or federated post pointing to it. An on-demand proxy would (hopefully) not run the same risk since it would require action (or really bad timing) on the part of a user.
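
    For what it’s worth, here’s a rough sketch of what an on-demand proxy could look like (hypothetical endpoint, Flask and requests assumed; a real one would also need caching, an allowlist, and size limits). Nothing is fetched until a user actually views the image, so the remote host sees the proxy’s IP rather than the reader’s:

    ```python
    # Rough sketch of an on-demand media proxy (hypothetical; Flask + requests).
    # The remote image is only fetched when a user actually requests it.
    import requests
    from flask import Flask, Response, abort, request

    app = Flask(__name__)

    @app.get("/proxy")
    def proxy_image():
        url = request.args.get("url", "")
        if not url.startswith(("https://", "http://")):
            abort(400)                              # reject non-HTTP(S) targets
        upstream = requests.get(url, timeout=10)    # server-side fetch hides the user's IP
        if upstream.status_code != 200:
            abort(502)
        return Response(
            upstream.content,
            content_type=upstream.headers.get("Content-Type", "application/octet-stream"),
        )
    ```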

    On the other hand, such a system would also pose a privacy problem: suppose someone foolishly believes Lemmy’s messaging feature is secure and sends a message with personal pictures (nudes, medical documents, whatever). Copying that data around to other servers probably isn’t what you want.

    Fair, but it’s a bit of a moot point. Sending the message between instances is already copying that data around, and even if it’s between two users of a single instance, it’s not end-to-end encrypted. Instance admins can see absolutely everything their users do.

    Orbot can do per-app VPNs for free if you’re willing to take the latency hit.

    Interesting! I wasn’t aware that there were any Android VPNs capable of doing per-app tunneling.


  • Thank you for making an informative and non-alarmist website around the topic of Web Environment Integrity.

    I’ve seen (and been downvoted for arguing against) so many articles, posts, and comments taking a sensationalized approach to the discussion around it, and it’s nice to finally see some genuine and wholly factual coverage of it.

    I really can’t overstate how much I appreciate your efforts towards ethical reporting here. You guys don’t use alarm words like “DRM,” and you went through the effort of actually explaining both what WEI does and how it poses a risk for the open web. Nothing clickbaity or ragebaity, and you don’t frame it dishonestly. Just a good, objective description of what it is in its current form and how that could be changed into everything people are worried about.

    Is there anything someone like me could contribute? It seems like our goals (informing users without inciting them, so they can create useful feedback without FUD and misinformation) align, and I’d love to help out any way I can. I read the (at the time incomplete) specs and explainer for WEI, and I could probably write a couple of paragraphs going over what they promised or omitted. If you check my post history, I also have a couple of my own examples of how the WEI spec could be abused to harm users.



  • For spoofing the user agent, I still think that some level of obscurity could help. The IP address is the most important part, but when sharing an internet connection with multiple people, knowing which type/version of device is making the request would help disambiguate between people with that IP (for example, a house with an Android user and an iPhone user). I wouldn’t say not having the feature is a deal breaker, but I feel like any step towards making it harder to serve targeted ads is a good step.

    Fair point on just using a regular VPN, but I’m hoping for something a bit more granular. It’s not that all traffic would need to be proxied, though. If I use some specific Lemmy instance or click on an image/link, that was my choice to trust those websites. The concern here is that simply scrolling past an embedded image will make a request to some third-party website that I don’t trust.






  • And here’s a concern about the decentralized-but-still-centralized nature of attesters:

    From my understanding, attesting is conceptually similar to how the SSL/TLS infrastructure currently works:

    • Each ultimately-trusted attester has their own key pair (e.g. root certificate) for signing.

    • Some non-profit group or corporation collects all the public keys of these attesters and bundles them together.

    • The requesting party (web browser for TLS, web server for WEI) checks the signature sent by the other party against the public keys in its bundle. If the signature matches one of them, the other party is trusted. If it doesn’t, they are not trusted.
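
    To make the analogy concrete, here’s a minimal sketch of that last step, assuming Ed25519 keys and Python’s cryptography package (the real WEI token format would differ): the verifier simply tries each trusted attester key in its bundle.

    ```python
    # Minimal sketch of the bundle check (assumes Ed25519 keys and the Python
    # "cryptography" package; the actual WEI token format would differ).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def is_attested(payload: bytes, signature: bytes,
                    trusted_keys: list[Ed25519PublicKey]) -> bool:
        """Return True if any attester in the trusted bundle signed the payload."""
        for key in trusted_keys:
            try:
                key.verify(signature, payload)
                return True        # signed by a bundled key -> other party is trusted
            except InvalidSignature:
                continue           # not this attester; try the next one
        return False               # nobody in the bundle vouches for it
    ```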

    This works for TLS because we have a ton of root certificates, intermediate certificates, and signing authorities. If CA Foo is prejudiced against you or your domain name, you can always go to another of the hundreds of CAs.

    For WEI, there isn’t such an infrastructure in place. It’s likely that we’ll have these attesters to start with:

    • Microsoft
    • Apple
    • Google

    But hey, maybe we’ll have some intermediate attesters as well:

    • Canonical
    • RedHat
    • Mozilla
    • Brave

    Even with that list, though, it doesn’t bode well for FOSS. Who’s going to attest to the various browser forks, or to browsers running on different operating systems that aren’t backed by corporations?

    Furthermore, if this is meant to verify the integrity of browser environments, what is that going to mean for devices that don’t support Secure Boot? Will they be considered unverified because the OS can’t ensure it wasn’t tampered with by the bootloader?


  • Adding another issue to the pile:

    Even if it isn’t the intent of the spec, it’s dangerous to allow websites to differentiate between unverified browsers, browsers attested to by party A, and browsers attested to by party B. Providing a mechanism for cryptographic verification opens the door for websites to enforce specific browsers.

    For a corporate example:

    Suppose we have ExampleTechFirm, a huge investor in a private AI company, ShutAI. ExampleTechFirm happens to also make a web browser, Sledge. ExampleTechFirm could exert influence on ShutAI so that ShutAI adds rate limiting to all browsers that aren’t verified with ShutAI as the attester. Now, anyone who isn’t using Sledge is being given a degraded experience. Because attesting uses cryptographic signatures, you can’t bypass this user-hostile quality of service mechanism; you have to install Sledge.

    For a political example:

    Consider that I’m General Aladeen, the leader of the country Wadiya. I want to spy on my citizens and know what all of them are doing on their computers. I don’t want to start a revolt by making it illegal to own a computer without my spyware EyeOfAladeen, nor do I have the resources to do that.

    Instead, I enact a law that makes it illegal for companies to operate in Wadiya unless their web services refuse access to Wadiyan citizens that aren’t using a browser attested to by the “free, non-profit” Wadiyan Web Agency. Next, I have my scientists create and release renamed versions of Chromium and Firefox with EyeOfAladeen bundled in them. Those are the only two browsers attested to by the Wadiyan Web Agency.

    Now, all my citizens are being encouraged to unknowingly install spyware. Goal achieved!


  • Fair and respectable points, but I don’t think we’re going to see eye to eye on this. It seems like we have different priorities when it comes to reporting on issues.

    Honestly, I don’t disagree with you in thinking that the ulterior motive of the proposal is to undermine user freedom, user privacy, and/or ad blockers. Given Google’s history with Manifest V3 and using Chrome’s dominance to force vendors to adopt out-of-spec changes to web standards (passive scroll listeners come to mind), it would be burying my head in the sand to expect otherwise. My issue here is with portraying speculation and personal opinions as objective truths. Even if I agree that a locked down web is the most likely outcome, it’s just not a fact until someone working on that proposal outright says it was their intent, or it actually happens.

    That doesn’t mean I think we should ignore the Doomsday device factory until it starts creating Doomsday devices, either, though. Google will never outright state that it is their goal to cripple adblockers or control the web, and if it comes to pass, they’ll just rely on corporate weasel words to claim that they never promised they wouldn’t. And since we can’t trust corporations to be transparent and truthful, we shouldn’t be taking their promises or claims at face value. You’re absolutely right about that.

    Going back to reporting about this kind of stuff, though: It’s not wrong for the original post to look past the surface-level claims, or for people to point out the corporate speak and lack of commitment. If there’s a factory labeled “Not Doomsday Devices” that pinkie promises they aren’t building Doomsday devices, I definitely would want someone to bring attention to it. I just don’t think the right way to do it is with a pitchfork-wielding mob of angry citizens who were told the factory is unquestionably building anthrax bioweapons.

    We don’t gain much from readers being told things that will worry them and piss them off. I mean—sure—there’s now more awareness about the issue. But it’s not actually all that constructive if they aren’t critically engaging with the proposal. Google and web standards committees aren’t going to listen to a bunch of angry Lemmy users reiterating the same talking points over and over. They’re just going to treat it as a brigade and block further feedback until people forget about it (which they did).

    If the topic was broached in a balanced and accurate way that refrained from drawing conclusions before providing readers with the facts, there would be fewer knee-jerk reactions. Maybe this is just me being naive, but I think it’s more likely that Google would be receptive to well-thought-out, respectful criticism as opposed to a significant quantity of hostile accusations.

    With that being said, I will concede that I overcorrected for the original post too much. I should have written a response covering the issue in a way that I found more ideal, rather than trying to balance out the bias from the original post. My goal was to point out the ragebait title and add missing information so readers could come to their own informed conclusions, not defend Google.



  • Did you read until the end, or was it more important to accuse me of either being stupid or a corporate shill? I have nothing against you, and I don’t see how it’s constructive to be hostile towards me.

    I said that the proposal itself does not aim to be DRM or adblock repellent, and cited the text directly from the document. It’s possible that something got lost in communication, but that wasn’t me trying to suggest that we should just blindly trust that this proposal has the users’ best interests at heart, or that motivations behind creating it could never, ever be disingenuous.

    Hell, I even made sure to edit my post to clarify how the proposal—if implemented—could be used to prevent ad blockers. The paragraphs right after the one you quoted say:

    To elaborate on the consequences of the proposal…

    Could it be used to prevent ad blocking? Yes. There are two hypothetical ways this could hurt adblock extensions:

    1. As part of the browser “environment” data, the browser could opt to send details about whether built-in ad-block is enabled, any ad-block extensions are enabled, or even if there are any extensions installed at all.

    Knowing this data and trusting it’s not fake, a website could choose to refuse to serve contents to browsers that have extensions or ad blocking software.

    2. This could lead to a walled-garden web. Browsers that don’t support the standard, or minority-usage browsers, could be prevented from accessing content.

    Websites could then require that users visit from a browser that doesn’t support adblock extensions.
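
    As a hypothetical illustration of that first point (the field names are invented; nothing like them exists in the current proposal), the gating logic on a website’s end could be as simple as:

    ```python
    # Hypothetical server-side gate. The "adblock_enabled" / "extensions_installed"
    # fields do NOT exist in the current proposal; this only illustrates how a
    # future, expanded attestation could be used to refuse service.
    def should_serve_page(attestation: dict) -> bool:
        if not attestation.get("attested", False):
            return False   # unverified browsers get nothing
        if attestation.get("adblock_enabled") or attestation.get("extensions_installed"):
            return False   # verified, but running a "tainted" environment
        return True
    ```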


  • Given Google’s history, the assertion made by the title isn’t wrong. That doesn’t mean that it’s objective and informative, however.

    The title suggests that the intent is to create DRM for web pages and “make ad blockers near-impossible”. From an informational standpoint, it correctly captures the likely consequences that would occur should the proposal be implemented. What it does not do (and neither does the post body) is provide the information or context to explain why the proposal demonstrates the claim being made.

    The reader is not informed about Google’s history of trying to subvert ad blockers, nor are they shown how the proposal will lead to DRMed web pages and adblock prevention. The post is a reaction-inducing title followed by a link to a proposal and angry comments on GitHub. That’s not informative; that’s ragebait.

    Suppose I give the post the benefit of the doubt, and consider the bar for being “informative” to be simply letting people know about something. It’s still not objective. I’m not saying the OP should support Google or downplay the severity of the proposal, but they could have got the same point across without including their own prejudices:

    “Google engineers propose new web standard that would enable websites to prevent access from browsers running adblockers or website-altering extensions.”

    For the record: I agree with what this post is trying to say. I just disagree with how it’s said. Lemmy isn’t hemorrhaging ad money, and it isn’t overwhelmingly noisy. We don’t need to bring over toxic engagement tactics to generate views.


  • Oh, for sure. When bullet point number one involves advertising, they don’t make it hard to see that the underlying motivation is to assist advertising platforms somehow.

    I think this is an extremely slippery and dangerous slope to go down, and I’ve commented as such and explained how this sort of thing could end up harming users directly as well as providing ways to shut out users with adblocking software.

    But, that doesn’t change my opinion that the original post is framed in a sensationalized manner and comes across as ragebaiting and misinforming. The proposal doesn’t directly endorse or enable DRMing of web pages and their contents, and the post text does not explain how the conclusion of adblockers being killed follows from the premise of the proposal being implemented. To understand how OP came to that conclusion, I had to read the full document, read the feedback on the GitHub issues, and put myself in the shoes of someone trying to abuse it. Unfortunately, not everyone will take the time to do that.

    As an open community, we need to do better than incite anger and lead others into jumping to conclusions. Teach and explain. Help readers understand what this is all about, and then show them how these changes would negatively impact them.


  • Having thought about it for a bit, it’s possible for this proposal to be abused by authoritarian governments.

    Suppose a government—say, Wadiya—mandated that all websites allowed on the Wadiyan Internet must ensure that visitors are using a browser from a list of verified browsers. This list is provided by the Wadiyan government, and includes: Wadiya On-Line, Wadiya Explorer, and WadiyaScape Navigator. All three of those browsers are developed in cooperation with the Wadiyan government.

    Each of those browsers also happens to send a list of visited URLs to a Wadiyan government agency, and routinely scans the hard drive for material deemed “anti-social.”

    Because the attestations are cryptographically verified, citizens would not be able to fake the browser environment. They couldn’t just download Firefox and install an extension to pretend to be Wadiya Explorer; they would actually have to install the spyware browser to be able to browse websites available on the Wadiyan Internet.


  • In my other comments, I did say that I don’t trust this proposal either. I even edited the comment you’re replying to to explain how the proposal could be used in a way to hurt adblockers.

    My issue is strictly with how the original post is framed. It’s using a sensationalized title, doesn’t attempt to describe the proposal, and doesn’t explain how the conclusion of “Google […] [wants] to introduce DRM for web pages” follows from the premise (the linked proposal).

    I wouldn’t be here commenting if the post had used a better title such as “Google proposing web standard for web browser verification: a slippery slope that may hurt adblockers and the open web,” summarized the proposal, and explained the potential consequences of it being implemented.