I support free and open source software (FOSS) like VLC, Qbittorrent, LibreOffice, Gimp…

But why do people say that it’s as secure or more secure than closed source software?

From what I understand, closed source software doesn’t disclose its code.

If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.

But open source projects have their code available to the entire world on websites like GitHub or GitLab.

Isn’t that actually also helping hackers?

  • Tattorack@lemmy.world · 6 hours ago

    With closed source, you have to take the word of the owner, who’s out to get your money.

    With open source, you can take the word of a whole community, or just go and look for yourself.

    In the end there is also a personal responsibility; there is no promise or guarantee in the whole world that’ll prevent you from being stupid with what you download or install.

  • Capricorn_Geriatric@lemmy.world · 10 hours ago

    It’s not “assumed” to be secure.

    It’s out there and visible for all to see. Hopefully, someone knowledgeable has taken it upon themselves to take a look at the software and assess its security.

    The largest projects, like all the ones you named, are popular enough that there’s no shortage of people taking a peek.

    Of course, that doesn’t mean actual security audits are uncalled for. They’re necessary, and they’re being done. And with the code out there, any credible auditor will audit all the code, since it’s available.

    Compare that to closed-source.

    With closed-source, the code isn’t out there. Anyone can poke around, sure, but that’s like poking a black box with a stick. You can infer some things, and there are occasional source code leaks, but it isn’t all visible. This is also much less efficient and requires much more work for a fraction of the results.

    The same goes for actual audits. Usually not all source code is handed over to the auditors, so some vulnerabilities remain uninspected and dormant.

    Sure, not having the code out there is “security”. If someone doesn’t see the code, it’s much harder to find the weakness. Harder, but not impossible.

    There’s a lot of open-source software. There’s also a lot of closed-source software, much more than the open-source kind, in fact.

    What open-sourcing does is increase the number of eyes looking at the code. And each of those eyes could find a weakness. It might be a bad actor, but it’s most likely a good one.

    With open source, any changes are publicly visible, and any attempt to sneak a backdoor in has a much higher chance of being seen, again due to the large number of eyes which can see it.

    Closed-source code also gives lazy programmers an easy way out of fixing vulnerabilities, or avoiding them in the first place - “no one will know”. With open source, again, there are a lot of eyes on the code - not just the one team writing it and another auditing it, as is often the case with closed source.

    That’s why open source software is safer in general. Precisely because it’s available, attacking it might seem easier. But for every bad actor looking at the code, there are at least ten people looking who aren’t. And if they spot a vulnerability, they’ll report it.

    Security with open source is almost always proactive, while with closed source it’s hit-or-miss. Many vulnerabilities have to cause an issue before being fixed.

  • JackbyDev@programming.dev · 13 hours ago

    A better question may be: why do you assume closed source software is secure? If nobody can see the code, how can we verify it is safe? Wouldn’t you have to be some sort of reverse-engineering expert to prove it’s safe?

  • ssillyssadass@lemmy.world · 10 hours ago

    Open source has more eyes looking over the code, more chances to catch some would-be loophole or exploit. Closed source stuff may have a team of qualified engineers, but there’s only so many on that team, and anyone can get tunnel vision.

  • MTK@lemmy.world · 13 hours ago

    What is more secure, a secret knock or an actual lock?

    The lock is something that everyone can look up, research and learn about. Sure, it means that people can learn to lockpick, but a well designed lock can stump even the best lockpickers.

    A secret knock is not secure at all. It sounds secure, but in reality it is just obscure, and if anyone learns it, or it’s simple enough to guess, it becomes meaningless. Even a bad lock will at least show signs that it was picked.

    So that’s an analogy, here is the actual explanation:

    Let’s assume we have a closed source product named C and an open source product named O, and that the security and quality of the code is the same. Both products are compiled and have been in active development for years. Both have a total of two people going over each code change of a new version: one person writes it, another reviews and approves it. After years of development you probably have about 10 people in total who have actually seen the code. Anything they missed will go unnoticed, any corners they decided to cut will be approved, any bad decisions they made will not be criticized.

    Here is where C and O differ: C will stay in this situation forever, only rarely getting feedback from researchers who found vulnerabilities and decided to report them. O will get small parts of it reviewed by hundreds of developers, and maybe even fully reviewed by a few people. Any corners that O cuts will be criticized; any backdoor that O tries to implement will be clear to see.

    C, on the other hand, has one small advantage: bad actors will have a harder time finding vulnerabilities in it because it is compiled and they would have to reverse engineer it, while O is clear for bad actors to read. But bad actors are a very small minority. Any vulnerability in O is far more likely to be caught by good actors, while C is very unlikely to be reversed by any good actors at all, so any vulnerabilities it has are far more likely to be found by bad actors first.
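
    The intuition above can be put into a toy model (all numbers here are made up for illustration): if each independent reviewer has some small chance of spotting a given bug, the chance that at least one reviewer catches it grows quickly with the number of reviewers.

```python
# Toy model with made-up numbers: each independent reviewer has a
# small probability p of spotting a given bug; with n reviewers the
# probability that at least one catches it is 1 - (1 - p)**n.

def catch_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.05                             # 5% chance per reviewer (assumed)
closed = catch_probability(p, 10)    # ~10 insiders ever see C's code
open_ = catch_probability(p, 200)    # hundreds of outside eyes on O

assert round(closed, 2) == 0.40
assert open_ > 0.99
```

    With these invented numbers, ten insiders catch the bug about 40% of the time, while a couple hundred outside reviewers make a miss vanishingly unlikely.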

    And it is important to note the conflict of interest that often exists in closed source software. A company that sells a product for profit and believes its code is hidden has very little interest in security, and almost no interest in end-user security. But if the code is not hidden, the company has an incentive to produce reasonably secure code to maintain its reputation.

    So almost always, open source leads to safer code for all parties involved.

  • neons@lemmy.dbzer0.com · 6 hours ago

    Because people assume someone is auditing it. Which is wrong - most of the time, nobody is auditing it.

  • Nibodhika@lemmy.world · 15 hours ago

    It’s simple, really: you have two people selling you a padlock. One offers a challenge - anyone who can break it earns bragging rights. The other comes in a black cardboard box that you can’t remove. Would you lock your stuff with something that tells people “I’m secure, prove me wrong”, or with something that could be anything from a padlock that will close and never let you open it again to an empty cardboard box that anyone can break with their hands?

    It’s the same thing with software. You need to realize that for every black hat (what people refer to as hackers) out there, there are dozens of white hats (security experts who earn their living by finding and reporting security flaws in software). For open source software, that means the chance of a security issue being found by a white hat is much higher, and if it’s found by a black hat, you have millions of people trying to figure out how he did it, where the vulnerability is, and how to fix it. Whereas with closed software you never know if it has been breached, and white hats can’t investigate and find a solution, so you depend on the security team from the company (most likely a small team of maybe 5 people, if we’re being generous) to figure it out and make a fix.

  • fmstrat@lemmy.nowsci.com · 1 day ago

    Others have mentioned this, but to make sure all context is clear:

    • FOSS software is not inherently more secure.
    • New FOSS software is probably no more secure than closed source software, because it likely doesn’t have many eyes on it and hasn’t been audited.
    • Mature FOSS software will likely have more CVEs reported against it than a closed source alternative, because there are more eyes on it.
    • Because of bullet 3, mature FOSS software is typically more secure than closed source, as security holes are found and patched publicly.
    • This does not mean a particular closed source tool is insecure, it means the community can’t prove it is secure.
    • I like proof, so I choose FOSS.
    • Most people agree, which is why most major server software is FOSS (or source available).
    • However, that’s also because of the permissive licensing.
    • liquefy4931@lemmy.world · 17 hours ago

      Also keep in mind that employees of companies that release closed source software are obligated to keep secret any gaping security vulnerabilities. This obligation usually comes with heavy legal ramifications that could be considered “life ruining” for many of us. e.g. Loss of your job plus a lawsuit.

      Often, none of the contributors to open source software are associated with each other and therefore have no obligation to keep discovered vulnerabilities a secret. In fact, I would assume that many contributors also actively use the software and have a personal interest in getting security vulnerabilities fixed.

  • lucullus@discuss.tchncs.de · 1 day ago

    Otherwise, you need to be some kind of freaking reverse-engineering expert.

    Nah, software is often stupidly easy to breach. Often it’s an openly accessible database (as recently with the Tea app), or a webapp that lets you pull other people’s data just by incrementing or decrementing the ID in your request (that commonly happened with quite a number of digital contact-tracing platforms used during Covid).
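
    The ID-increment flaw described above is known as an insecure direct object reference (IDOR). A minimal sketch of the bug and its fix - all names and data here are hypothetical:

```python
# Hypothetical records keyed by sequential IDs, as in the breached apps.
RECORDS = {
    1: {"owner": "alice", "data": "alice's private info"},
    2: {"owner": "bob", "data": "bob's private info"},
}

def fetch_insecure(record_id: int, requesting_user: str):
    # Vulnerable: trusts the ID from the request, so any logged-in
    # user can walk IDs 1, 2, 3, ... and read everyone's data.
    return RECORDS.get(record_id)

def fetch_secure(record_id: int, requesting_user: str):
    # Fixed: the record must actually belong to the requester.
    record = RECORDS.get(record_id)
    if record is None or record["owner"] != requesting_user:
        return None  # in a real webapp: respond with 403/404
    return record

# "bob" decrements the ID in his request and reads alice's record:
assert fetch_insecure(1, "bob")["data"] == "alice's private info"
# With the ownership check, the same request is refused:
assert fetch_secure(1, "bob") is None
assert fetch_secure(2, "bob")["data"] == "bob's private info"
```

    Note that hiding the source wouldn’t have helped here: an attacker only needs to notice the ID in the URL.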

    Very often the closed source just obscures the screaming security issues.

    And yeah, there are not enough people to thoroughly audit all the open source code. But there are more people doing that than you think. Another thing to keep in mind is that reporting a security problem with a piece of software or a service can get you in serious legal trouble depending on your jurisdiction - justified or not. Corporations won’t hesitate to sue you out of existence if they can hide the problems that way. With open source software you typically don’t have any problems like this, since collaboration and transparency are baked into it.

  • Captain Aggravated@sh.itjust.works · 2 days ago

    You live in some Detroit-like hellscape where everyone everywhere 24/7 wants to kill and eat you and your family. You go shopping for a deadbolt for your front door, and encounter two locksmiths:

    Locksmith #1 says “I have invented my own kind of lock. I haven’t told anyone how it works, the lock picking community doesn’t know shit about this lock. It is a carefully guarded secret, only I am allowed to know the secret recipe of how this lock works.”

    Locksmith #2 says “Okay, so the best lock we’ve got was designed in the 1980s. The design is well known, the blueprints are publicly available, the locksport and various bad-guy communities have had these locks for decades, and the few attacks they made work were fixed by the manufacturer so they don’t work anymore. Nobody has demonstrated a successful attack on the current revision of this lock in the last 16 years.”

    Which lock are you going to buy?

  • Lemvi@lemmy.sdf.org · 2 days ago

    The code being public helps with spotting issues or backdoors.

    In practice, “security by obscurity” doesn’t really work. The code’s security should hinge on the quality of the code itself, not on the number of people who know it.
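
    This is essentially Kerckhoffs’s principle: a system should stay secure even if everything about it except the key is public. A minimal sketch of the contrast (the knock string and messages are invented for illustration):

```python
import hashlib
import hmac
import secrets

# Security by obscurity: safe only while nobody reads this source.
def knock_check(attempt: str) -> bool:
    return attempt == "tap-tap-pause-tap"  # exposed the moment the code leaks

# Kerckhoffs-style design: the algorithm (HMAC-SHA256) is public;
# only the randomly generated key is secret.
KEY = secrets.token_bytes(32)

def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information via timing.
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"open the door")
assert verify(b"open the door", tag)
assert not verify(b"forged request", tag)
assert knock_check("tap-tap-pause-tap")  # anyone who read this file gets in
```

    Publishing the second design costs nothing, which is why open crypto standards survive scrutiny that secret ones don’t.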

    • Pup Biru@aussie.zone · 10 hours ago

      security by obscurity doesn’t work on its own, but it can be one pillar in a multi-faceted security strategy. in the case of FOSS vs closed source, the downsides (not having eyes on it, etc) outweigh the upsides… but writing off security by obscurity (layered with other security) in all cases is the wrong approach to take

    • WhatAmLemmy@lemmy.world · 2 days ago

      It also provides some assurance that the service/project/company is doing what they say they are, instead of “trust us”.

      Meta has deployed code so criminal that everyone who knew about it should be serving hard jail time (if we didn’t live in corporate dictatorships). If their code were public they couldn’t pull shit like this anywhere near as easily.

    • unexposedhazard@discuss.tchncs.de · 2 days ago

      Yuup. “Security by obscurity” relies on the attacker not understanding how the software works. Problem is, hackers usually know how software works, so that barrier is almost nonexistent.

    • bamboo@lemmy.blahaj.zone · 2 days ago

      The code being public helps with spotting issues or backdoors.

      A recent example of this is the lengths the TALOS group had to go to to reverse engineer Dell ControlVault, which affects hundreds of models of Dell laptops. Their blog post goes through all of the steps they had to take, and they note that, fortunately, there was some Linux support with publicly available shared objects with debug symbols, which helped them reverse the ecosystem. Dell has all this source code and could have identified these issues much more easily themselves, but didn’t, and shipped an insecure product, leaving customers vulnerable.

  • steeznson@lemmy.world · 1 day ago

    There isn’t a clear divide between open source software and proprietary software anymore, due to how complex modern applications are. Proprietary software is typically built on top of open source libraries: Python’s Django web framework, OpenSSL, xz-utils, etc. Basically, nothing is entirely safe; even if you wrote everything yourself, you could still introduce bugs, and dependencies expose you to supply-chain attacks.

    • bestboyfriendintheworld@sh.itjust.works · 2 days ago

      In theory you can see the code. In practice you don’t actually look at it, nor do you have the knowledge to understand and assess the security implications of all the software you use.

      In practice it makes little difference for security whether you use open or closed source software.

      • Grenfur@pawb.social · 2 days ago

        No, you literally can see the code; that’s why it’s open source. YOU may not look at it, but people do. Random people, complete strangers, unpaid and with no vested interest in the project. The alternative is a company that pays people to say “Yeah, it’s totally safe”. That conflict of interest is problematic. Also, depending on what it’s written in, yes, I do sometimes take the time. Perhaps not for every single thing I run, but any time I run across a niche project, I read first. To claim that someone can’t understand it is wild. That’s a stranger on the internet; your knowledge of their expertise is zero.

        In practice, 1,000 random people on the internet with no reason to “trust you, bro” being able to audit every change you make to your code is far more trustworthy than a handful of people paid by the company they represent. What’s worse, if Microsoft were to have a breach, maybe 10 people on the planet would know about it. 10 people with jobs, mortgages, and families tied to that knowledge. They won’t say shit, because they can’t lose that paycheck. Compare that to, say, the XZ backdoor, where the source is available and the issue gets announced, so people know exactly who, what, and where in order to resolve it.

  • TabbsTheBat@pawb.social · 2 days ago

    It’s because anyone can find and report vulnerabilities, while a closed source vendor could have some issue behind closed doors and not mention that data is at risk, even if they knew.