The Huntarr situation (score 200+ and climbing today) is getting discussed as a Huntarr problem. It’s not. It’s a structural problem with how we evaluate trust in self-hosted software.

Here’s the actual issue:

Docker Hub tells you almost nothing useful about security.

The ‘Verified Publisher’ badge verifies that the namespace belongs to the organization. That’s it. It says nothing about what’s in the image, how it was built, or whether the code was reviewed by anyone who knows what a 403 response is.

Tags are mutable pointers. huntarr:latest today is not guaranteed to be huntarr:latest tomorrow. There’s no notification when a tag gets repointed. If you’re pulling by tag in production (or in your homelab), you’re trusting a promise that can be silently broken.

The only actually trustworthy reference is a digest: sha256:.... Immutable, verifiable, auditable. Almost nobody uses them.

The Huntarr case specifically:

Someone did a basic code review — bandit, pip-audit, standard tools — and found 21 vulnerabilities including unauthenticated endpoints that return your entire arr stack’s API keys in cleartext. The container runs as root. There’s a Zip Slip. The maintainer’s response was to ban the reporter.

None of this would have been caught by Docker Hub’s trust signals, because Docker Hub’s trust signals don’t evaluate code. They evaluate namespace ownership.

What would actually help:

  • Pull by digest, not tag. Pin your compose files.
  • Check whether the image is built from a public, auditable Dockerfile. If the build process is opaque, that’s a signal.
  • Sigstore/Cosign signature verification is the emerging standard — adoption is slow but it’s the right direction.
  • Reproducible builds are the gold standard. Trust nothing, verify everything.
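In a compose file, digest pinning looks like the fragment below. The image path and digest are placeholders (the source doesn't confirm Huntarr's actual Docker Hub path); resolve the real digest yourself before pinning:

```yaml
services:
  huntarr:
    # Pinned to an immutable digest instead of a mutable tag.
    # The digest below is a placeholder -- resolve the real one with:
    #   docker inspect --format '{{index .RepoDigests 0}}' huntarr/huntarr:latest
    image: huntarr/huntarr@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

A pull against an `image@sha256:…` reference fails if the content doesn't match, which is the whole point: nobody can silently repoint what you're running.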

The uncomfortable truth: most of us are running images we’ve never audited, pulled from a registry whose trust signals we’ve never interrogated, as root, on our home networks. Huntarr made the news because someone did the work. Most of the time, nobody does.

  • Sir. Haxalot@nord.pub · 13 points · 2 hours ago

    I’m like 90% sure that this post is AI Slop, and I just love the irony.

    First of all, the writing style reads a lot like AI… but that is not the biggest problem. None of the mitigations mentioned have anything to do with the Huntarr problem. Sure, they have their uses, but the problem with Huntarr was that it was a vibe-coded piece of shit. Using immutable references, image signing, or checking the Dockerfile would do fuck-all about the problem that the code itself was missing authentication on some important, sensitive API endpoints.

    Also, Huntarr does not appear to be a Verified Publisher at all. Did their status get revoked, or was that a hallucination to begin with?

    To be fair, though, the last paragraph does have a point. But for a homelab I don’t think it’s feasible to fully review the source code of everything you install. It rather comes down to being careful with things that are new and don’t have an established reputation, which is especially a problem in the era of AI coding. The rest of the *arr stack is probably much safer because it’s made up of open-source projects that have been around for a long time and have had a lot of eyes on them.

  • porkloin@lemmy.world · 2 points · 1 hour ago

    I know it’s not the issue here really but

    the container runs as root

    That’s why we need to push for more self-hosted containers to support running rootless. There’s no reason for it other than laziness, IMHO.

    It’s wild to me how many people will jump through a bunch of other random security hoops but not blink an eye at running containers as root.
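For images that support it, dropping root is a one-line compose change. A minimal hardening sketch (the image name is a placeholder, and this assumes the image doesn't require root-only paths):

```yaml
services:
  app:
    image: example/app:1.2.3
    user: "1000:1000"     # run the container process as an unprivileged UID:GID
    read_only: true       # optional hardening: immutable root filesystem
    cap_drop: [ALL]       # drop all Linux capabilities the app doesn't need
```

If the container then fails on startup, it's usually because the image writes to root-owned paths — which is itself a useful signal about how it was built.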

  • Hawk@lemmy.dbzer0.com · 3 points · 2 hours ago

    With an API that just runs unauthenticated, I’m unsure what any of these suggestions is supposed to improve here.

  • bdonvr@thelemmy.club · 10 points · edited · 3 hours ago

    Pinning your versions just means updating will be a pain, and you’ll probably start running outdated containers that are security risks.

    It’s not like you’re doing code audits every update anyway. Just use containers that are established and seem trustworthy. It’s all you can really do.

    • androidul@lemmy.world · 1 point · 3 hours ago

      Sure, but Renovate can be used in such scenarios: an MR is opened, a scan is triggered in the CI/CD pipeline, and that’s how you verify.
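Renovate can do the digest-pinning work itself. A minimal config along those lines might look like this (a sketch; adjust the presets to your repo):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "pinDigests": true
    }
  ]
}
```

With `pinDigests` enabled, Renovate rewrites tag references to `tag@sha256:…` and then opens an MR whenever the upstream digest changes, so your CI scan runs against the exact new content before you merge.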

    • Pika@sh.itjust.works · 3 points · edited · 3 hours ago

      I believe they are talking about this.

      If you have it at all exposed to the internet, you should probably shut it down.

      As a summary: multiple endpoints in the software don’t check for authentication, so an unauthenticated person can retrieve your complete settings configuration — including your API keys and your password — and can also change your current configuration, just by sending a simple POST request.

      It’s wild to me that that was possible at all.
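The vulnerability class being described is simple enough to sketch. This is not Huntarr's actual code — just a hypothetical settings handler with no auth check, next to a fixed version that requires a pre-shared API key:

```python
import hmac

# Hypothetical server-side state for illustration only.
SERVER_API_KEY = "s3cret-key"
SETTINGS = {"sonarr_api_key": "abc123", "password": "hunter2"}

def get_settings_vulnerable(request_headers: dict) -> tuple[int, dict]:
    # No authentication check at all: any caller gets the full config,
    # API keys and password included.
    return 200, SETTINGS

def get_settings_fixed(request_headers: dict) -> tuple[int, dict]:
    supplied = request_headers.get("X-Api-Key", "")
    # Constant-time comparison avoids leaking the key via timing.
    if not hmac.compare_digest(supplied, SERVER_API_KEY):
        return 401, {"error": "unauthorized"}
    return 200, SETTINGS
```

An unauthenticated caller gets `200` plus every secret from the first handler, and `401` from the second — the entire fix is one guard clause that was simply never written.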

      • MIXEDUNIVERS@discuss.tchncs.de · 3 points · 3 hours ago

        Ah yes, I googled it and found the Reddit post; when I get home I’ll remove it. It didn’t have that many functions I needed, but I did like that it was a control dashboard.

  • CameronDev@programming.dev · 27 points · 8 hours ago

    Pulling by digest just ensures that people end up running an ancient version, vulnerabilities and all, long after any issues were patched, so that isn’t a one-size-fits-all solution either.

    Most projects are well behaved, so pulling latest makes sense; they likely have fixes that you need. In the case of an actually malicious project, the answer is to not run it at all. Huntarr showed their hand: you cannot trust any of their code.

    • wilo108@lemmy.ml · 3 points · 7 hours ago

      I use digests in my docker compose files, and I update them when new versions are released (after reading the release notes) 🤷

      • BradleyUffner@lemmy.world · 5 points · 4 hours ago

        Is manually updating based on trusting the accuracy of the release notes any more secure than just trusting “latest”?

      • suicidaleggroll@lemmy.world · 15 points · 6 hours ago

        Unfortunately that approach is simply not feasible unless you have very few containers or you make it your full time job.

        • wilo108@lemmy.ml · 4 points · 5 hours ago

          I dunno, I’ve never found it all that onerous.

          I have a couple of dozen (perhaps ~50) containers running across a bunch of servers, I read the release notes via RSS so I don’t go hunting for news of updates or need to remember to check, and I update when I’m ready to. Security updates will probably be applied right away (unless I’ve read the notes and decided it’s not critical for my deployment(s)), for feature updates I’ll usually wait a few days (dodged a few bullets that way over the years) or longer if I’m busy, and for major releases I’ll often wait until the first point release unless there’s something new I really want.

          Unless there are breaking changes it takes a few moments to update the docker-compose.yaml and then dcp (aliased to docker compose pull) and dcdup (aliased to docker compose down && docker compose up -d && docker compose logs -f).

          I probably do spend upwards of maybe 15 or 20 minutes a week under normal circumstances, but it’s really not a full time job for me 🤷.

  • SavvyWolf@pawb.social · 12 points · 8 hours ago

    I don’t think those are sufficient. We could prove that a given binary can be produced from a given repo commit, but that doesn’t actually ensure that the code itself is safe. Malicious code is malicious code even if it’s reproducible.