• 1 Post
  • 234 Comments
Joined 1 year ago
Cake day: July 14th, 2023



  • They aren’t. From a comment on https://www.reddit.com/r/ublock/comments/32mos6/ublock_vs_ublock_origin/ by u/tehdang:

    For people who have stumbled into this thread while googling “ublock vs origin”. Take a look at this link:

    http://tuxdiary.com/2015/06/14/ublock-origin/

    "Chris AlJoudi [current owner of uBlock] is under fire on Reddit due to several actions in recent past:

    • In a Wikipedia edit for uBlock, Chris removed all credits to Raymond [Hill, original author and owner of uBlock Origin] and added his name without any mention of the original author’s contribution.
    • Chris pleaded for donations with overblown details on expenses, like $25 per week for web hosting.
    • The activities of Chris since he took over the project are more business and advertisement oriented than development driven."

    So I would recommend that you go with uBlock Origin and not uBlock. I hope this helps!

    Edit: Also got this bit of information from here:

    https://www.reddit.com/r/chrome/comments/32ory7/ublock_is_back_under_a_new_name/

    TL;DR:

    • gorhill [Raymond Hill] got tired of dozens of “my facebook isnt working plz help” issues.
    • he handed the repository to chrismatic [Chris AlJoudi] while maintaining control of the extension in the Chrome webstore (by forking chrismatic’s version back to himself).
    • chrismatic promptly added donate buttons and a “made with love by Chris” note.
    • gorhill took exception to this and asked chrismatic to change the name so people didn’t confuse uBlock (the original, now called uBlock Origin) and uBlock (chrismatic’s version).
    • Google took down gorhill’s extension. Apparently this was because of the naming issue (since technically chrismatic has control of the repo).
    • gorhill renamed and rebranded his version of ublock to uBlock Origin.

  • Say I go to a furniture store - Ubersoft, to borrow the name from your example - and buy a table. It has a 5 year warranty. 2 years later, it breaks, so I call Ubersoft and ask them to honor the warranty and fix it. If they don’t, then I can file a suit against them, e.g., for breach of contract. I may not even have to file a suit, as there may be government agencies that receive and act on these complaints, like my local consumer protection division.

    I’m talking about real things here. Your example is a situation where the US government agrees that a company shouldn’t be permitted to take my money and then renege on their promises. And that’s generally true of most governments.

    Supposing an absence of regulations protecting consumers like me, as you suggest in your example, it would be reasonable to assume an equal absence of laws and regulations protecting the corporation from consumers like me. Absent such laws, a consumer would be free to take matters into their own hands. They could go back to Ubersoft and take a replacement table without their agreement - it wouldn’t be “stealing” because it wouldn’t be illegal. If Ubersoft were closed, the consumer could break in.

    If Ubersoft security tried to stop them, the consumer could retaliate - damaging Ubersoft’s property, physically attacking the owner / management / employees, etc… Ubersoft could retaliate as well, of course - nothing’s stopping them. And as a corporation, they certainly have more power than a random consumer - but at that point they would need to employ their own security forces rather than relying on the government for them.

    Even if we kept laws prohibiting physical violence, the consumer would still be constrained by things like copyright and IP protections, e.g., the anti-circumvention provisions of the DMCA. Absent such regulations, a consumer whose software was rendered unusable or changed in a way they didn’t like could reverse engineer it, bypass DRM, host their own servers, etc… Given that you didn’t speak against those regulations, I can only infer that you are not opposed to them.

    Why do you think we don’t need regulations protecting consumers but that we do need regulations restricting them?



  • theoretically they can

    Is this a purely theoretical capability or is there actually evidence they have this capability?

    it’s already been proven that they can tap into anyone’s phone

    Listening into a conversation that you’re intentionally relaying across public infrastructure and gaining access to the phone itself are two very different things.

    The use of proprietary software in literally everything

    1. Speak for yourself. And let’s be real, if you’re on Lemmy you’re 10 times more likely to be running Linux.
    2. Proprietary != closed source
    3. Do you really think that just because something is closed source means that it can’t be analyzed?

    the amount of exploits the NSA has on hand

    How many zero-day exploits does the NSA have? How many can be deployed remotely and without a nontrivial action by a user?

    what’s stopping the NSA from spying this much?

    Scale, capacity, cost, number of employees

    ---

    I’m not saying we shouldn’t oppose government surveillance. We absolutely should. But like another commenter pointed out, I’m much more concerned with the amount of data that corporations collect and have.


  • reasonable expectations and uses for LLMs.

    LLMs are only ever going to be a single component of an AI system. We’ve only had LLMs with their current capabilities for a very short time, so there’s been limited opportunity for the research and experimentation needed to find optimal system patterns given those capabilities.

    I personally believe it’s possible, but we need to get vendors and managers to stop trying to sprinkle “AI” in everything like some goddamn Good Idea Fairy.

    That’s a separate problem. Unless it results in decreased research into improving the systems that leverage LLMs, e.g., by fostering pervasive negative AI sentiment, it won’t have a negative effect on the progress of the research. Rather the opposite, in fact: seeing which uses of AI are successful and which are not (success here being measured by customer acceptance and interest, not by the AI’s efficacy) is information that can help direct and inspire research avenues.

    LLMs are good for providing answers to well defined problems which can be answered with existing documentation.

    Clarification: LLMs are not reliable at this task, but we have patterns for systems that leverage LLMs that are much better at it, thanks to techniques like RAG, supervisor LLMs, etc…
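
    To sketch what one such pattern looks like (purely illustrative: real retrieval uses embedding / vector search, and call_llm is a hypothetical stand-in for whatever model API you use):

    ```python
    # Toy RAG (retrieval-augmented generation) pattern: instead of asking the
    # LLM to recall facts, retrieve the relevant documentation and have the
    # LLM answer strictly from it.

    def score(query: str, doc: str) -> int:
        # Toy relevance metric: shared-word count. Real systems use embeddings.
        return len(set(query.lower().split()) & set(doc.lower().split()))

    def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
        return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

    def answer(query: str, docs: list[str]) -> str:
        context = "\n---\n".join(retrieve(query, docs))
        prompt = (
            "Answer using ONLY the context below. If it is insufficient, say so.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}"
        )
        return call_llm(prompt)

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for your model / provider API call.
        raise NotImplementedError
    ```

    A supervisor pattern layers a second model (or plain rules) on top to check the first model’s output before it’s surfaced.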

    When the problem is poorly defined and/or the answer isn’t as well documented or has a lot of nuance, they then do a spectacular job of generating bullshit.

    TBH, so would a random person in such a situation (if they produced anything at all).

    As an example: how often have you heard about a company’s marketing department over-hyping an upcoming product, resulting in unmet consumer expectations, a ton of extra work for the product’s developers and engineers, or both? This happens because those marketers don’t really understand the product - because they don’t have the information, didn’t read it, got conflicting information, or because the information they have is written for a different audience - i.e., a developer, not a marketer - and the nuance is lost in translation.

    At the company level, you can structure a system that marketers work within that will result in them providing more correct information. That starts with them being given all of the correct information in the first place. However, even then, the marketer won’t be solving problems like a developer. But if you ask them to write some copy to describe the product, or write up a commercial script where the product is used, or something along those lines, they can do that.

    And yet the marketer’s role here is still more complex than what our existing AI systems can handle, but those systems already incorporate patterns very similar to the ones a marketer uses day-to-day. And AI researchers - academic, corporate, and hobbyist - are looking into more ways this can be done.

    If we want an AI system to be able to solve problems more reliably, we have to, at minimum:

    • break down the problems into more consumable parts
    • ensure that components are asked to solve problems they’re well-suited for, which means that we won’t be using an LLM - or even necessarily an AI solution at all - for every problem type that the system solves
    • have a feedback loop / review process built into the system

    In terms of what they can accept as input, LLMs have a huge amount of flexibility - much higher than what they appear to be good at and much, much higher than what they’re actually good at. They’re a compelling hammer. System designers need to not just be aware of which problems are nails and which are screws or unpainted wood or something else entirely, but also ensure that the systems can perform that identification on their own.
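
    As a toy sketch of those points - decompose, route each piece to a component suited to it (not necessarily an LLM), and gate results behind a review step - with every name here hypothetical:

    ```python
    # Hypothetical problem router with a review loop. Deterministic problem
    # types go to deterministic tools; only fuzzy ones go to an LLM.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Task:
        kind: str      # e.g. "arithmetic", "summarize"
        payload: str

    def solve_arithmetic(payload: str) -> str:
        # A deterministic tool - no LLM needed for this problem type.
        # (eval is for illustration only; never eval untrusted input.)
        return str(eval(payload, {"__builtins__": {}}))

    def solve_with_llm(payload: str) -> str:
        raise NotImplementedError  # stand-in for an LLM-backed component

    ROUTES: dict[str, Callable[[str], str]] = {
        "arithmetic": solve_arithmetic,
        "summarize": solve_with_llm,
    }

    def review(task: Task, result: str) -> bool:
        # Feedback loop: rules, tests, or a supervisor model decide whether
        # the result is acceptable or needs another pass.
        return bool(result)

    def run(tasks: list[Task]) -> list[str]:
        results = []
        for task in tasks:
            result = ROUTES[task.kind](task.payload)
            if not review(task, result):
                raise ValueError(f"rejected result for {task.kind!r}")
            results.append(result)
        return results

    print(run([Task("arithmetic", "2 + 3 * 4")]))  # ['14']
    ```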


  • That’s still a single point of failure.

    So is TLS, and so is any major root certificate authority - yet neither has any bearing on whether an approach qualifies as using 2FA.

    The question is “How vulnerable is your authentication approach to attack?” If an approach is especially vulnerable, like using SMS or push notifications (where you tap to confirm vs receiving a code that you enter in the app) for 2FA, then it should be discouraged. So the question becomes “Is storing your TOTP secrets in your password manager an especially vulnerable approach to authentication?” I don’t believe it is, and further, I don’t believe it’s any more vulnerable than using a separate app on your mobile device (which is the generally recommended alternative).
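
    For context, a TOTP “secret” is just a small shared key; the six-digit code is derived from it and the current time (RFC 6238). A minimal sketch (the base32 string is a made-up example):

    ```python
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC the current 30-second counter with the shared key,
        # then dynamically truncate to a short code (RFC 4226).
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # six digits that change every 30 seconds
    ```

    Whoever holds that secret can mint valid codes indefinitely, so the real question is where the secret is stored and how well that store is protected, not which app displays the digits.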

    What happens if someone finds an exploit that bypasses the login process entirely?

    Then they get a copy of your encrypted vault. If your vault password is weak, they’ll be able to crack it and get access to everything. This is a great argument for making sure you have a good vault password, but there are a lot of great arguments for that.

    Or do you mean that they get access to your logged in vault by compromising your device? That’s the most likely worst case scenario, and in such a scenario:

    • all of your logged in accounts can be compromised by stealing your sessions
    • even if you use a different app for your 2FA, any TOTP secrets and passkeys stored on the compromised device can be stolen - to be safe, they’d have to be on a different device
    • you’re also likely to be subject to a ransomware attack

    In other words, the only accounts protected in this scenario by keeping their TOTP secrets on a different device are the ones you don’t use on the compromised device in the first place. This is mostly relevant if your computer is compromised - if your phone is compromised, then it doesn’t matter that you use a separate password manager and authenticator app.

    If you use an account on your computer, that account can be compromised even without its credentials being stored on the device, so you might as well store them there. If you’re concerned about device compromise and want to protect an account that you don’t use on that device, then you can keep its credentials in a different vault that isn’t stored on your device.

    Even more common, though? MITM phishing attacks. If your password manager verifies the url, fills the password, and fills your TOTP, that can help against those. Keep your TOTP on a separate device and those protections fall away, since you’re entering the code by hand. And if your vault has been compromised and your passwords are known to an attacker, but they don’t have your TOTP secrets, you’re at higher risk of erroneously entering those codes into a phishing site.
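
    To illustrate why the url check matters - a toy version of the idea, not any particular password manager’s actual logic:

    ```python
    from urllib.parse import urlsplit

    def should_autofill(stored_url: str, page_url: str) -> bool:
        # A lookalike phishing domain won't match the origin saved with the
        # credential, so nothing gets filled - a human eyeballing the address
        # bar is far easier to fool.
        stored, page = urlsplit(stored_url), urlsplit(page_url)
        return (stored.scheme, stored.hostname) == (page.scheme, page.hostname)

    print(should_autofill("https://example.com/login", "https://example.com/login"))  # True
    print(should_autofill("https://example.com/login", "https://examp1e.com/login"))  # False
    ```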

    Either approach (same app vs different app) has trade-offs and both approaches are vulnerable to different sorts of attacks. It doesn’t make sense to say that one counts as 2FA but the other doesn’t. They’re differently resilient - that’s it. Consider your individual threat model and one may be a better option than the other.

    That said, if you’re concerned about the resiliency of your 2FA approach, then look into using dedicated security keys. U2F / WebAuthn both give better phishing resistance than a browser extension filling a password or TOTP can, and having the private key inaccessible can help mitigate device compromise concerns.




  • The list of instances you shared was updated recently, but I tried the one url in it (the rest are onion links or i2p, and are older versions of libreddit to boot) and the page didn’t even load.

    Libreddit was discontinued nearly a year ago after Reddit’s API changes broke it, though about a month ago the maintainers updated the repo to direct people to Redlib, which allegedly does work. That said, I tried the official instance and got an error. However, it’s being actively developed and looks easy to self-host. I don’t know if there’s a list of unofficial public instances.
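
    If you do want to self-host it, the project publishes a container image; a minimal sketch (image name and default port taken from the Redlib README as of this writing - verify them before relying on this):

    ```yaml
    services:
      redlib:
        image: quay.io/redlib/redlib:latest
        ports:
          - "8080:8080"   # Redlib listens on 8080 by default
        restart: unless-stopped
    ```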



  • I haven’t worked with Scribus but I’ve heard good things about it, so I don’t think you’d be making a wrong choice by going with it. For this use case, the main reasons I can think of for why LaTeX would be preferable would be:

    • if you prefer working with LaTeX, or with a particular LaTeX tool
    • if you want to learn one tool or the other
    • if you want to be able to script the creation of the output and the equivalent isn’t possible in Scribus




  • You can also get replacement Hall effect analog sticks from Gulikit and install them in your joycons yourself. They also made them for the Steam Deck. I installed a set in my old LCD Steam Deck and it was really straightforward, but I suspect the joycons take a bit more work.

    It’s a shame they don’t make them for the PS5 - there are multiple third party controllers with Hall effect sensors that are compatible with pretty much everything else, but there’s only one Hall effect controller compatible with the PS5 (the Nacon Revolution 5 Pro), and it’s $200.


  • If you use that docker compose file, I recommend you comment out the build section and uncomment the image section in the lemmy service.

    I also recommend you use a reverse proxy and Docker networks rather than exposing the postgres instance on port 5433, but if you aren’t familiar with Docker networks you can leave it as is for now.

    If you’re running locally and don’t open that port in your router’s firewall, it’s a non-issue unless there’s an attacker on your LAN. But given that you’re not gaining anything from exposing it (unless you regularly need to connect to the DB directly - as a one-off you could temporarily add the port mapping), it doesn’t make sense to increase your attack surface for no benefit.
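
    For illustration, the relevant edits might look something like this (service and network names are assumed to match the standard Lemmy compose file - adjust to yours, and pin whatever release you’re actually deploying):

    ```yaml
    services:
      lemmy:
        image: dessalines/lemmy:0.19   # use the published image...
        # build:                       # ...instead of building from source
        #   context: ./lemmy
        networks:
          - lemmyinternal

      postgres:
        image: postgres:16-alpine
        # ports:
        #   - "5433:5432"              # don't publish the DB on the host
        networks:
          - lemmyinternal              # lemmy reaches it at postgres:5432

    networks:
      lemmyinternal:
    ```

    With both services on the same Docker network, Lemmy connects to postgres by service name, and nothing is reachable from outside except what your reverse proxy publishes.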


  • I’m not the person you responded to, but I can say that it’s a perfectly fine take. My personal experience and the commonly voiced opinions about both browsers support it.

    Unless you’re using 5 tabs max at a time, my personal experience is that Firefox is more than an order of magnitude more memory efficient than Chrome when dealing with long-lived sessions with the same number of tabs (dozens up to thousands).

    I keep hundreds of tabs open in Firefox on my personal machine (with 16 GB of RAM) and it’s almost never consuming the most memory on my system.

    Policy prohibits me from running Firefox on my work computer, so I have to use Chrome. Even with much more memory (both on 32 GB and 64 GB machines) and far fewer tabs (20-30 at most vs 200-300), Chrome often ends up consuming far too much memory and suffering a substantial performance drop, and I have to go through and prune the tabs I don’t need right now, bookmark things that can be done later, etc…

    Also, see https://www.techspot.com/news/102871-zero-regrets-firefox-power-user-kept-7500-tabs.html - I’ve never seen anything similar for Chrome and wasn’t able to find anything.


  • Definitely not, I do the same.

    I installed 64 GB of RAM in my Windows laptop 4 years ago and had been using 64 GB in the laptop it replaced - which was from 2013 (I think I bought it in 2014-2015). I was using 32 GB prior (on Linux and Windows laptops), all the way back to 2007 or so.

    My work MacBook Pros generally have 32-64 GB of RAM, but my personal MacBook Air (the 15” M2) has 16 GB, simply because the upgrade wasn’t a cost effective one (and the M1 before it had performed great with 16) and because I’d only planned on using it for casual development. But since I’ve been using it as my main personal development machine and for self-hosted AI, and have run into its limits, when I replace it I’ll likely opt for 64 GB or more.

    My Windows gaming desktop only has 32 GB of RAM, though. That’s because getting more RAM with good timings - particularly across four sticks - was prohibitively expensive when I built it, and when cost was no longer a concern and I tried to upgrade, I learned that my third and fourth RAM slots weren’t functional. I could upgrade to 64 GB in two slots, but it wouldn’t really be worth it, since I only use that machine for gaming.

    My Linux desktop / server has 128 GB of ECC RAM, though, because that’s as much as the motherboard supported.