Reddit’s API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: This doesn’t touch Reddit’s servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.

What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
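The Pushshift dumps are newline-delimited JSON, one object per line, so the core loop is "stream, decode, render". A minimal sketch of that pipeline (the real dumps are zstd-compressed and need the third-party `zstandard` package with `max_window_size=2**31` for their long-window frames; gzip stands in here so the sketch is stdlib-only, and the field names are just the usual Pushshift ones):

```python
import gzip
import html
import json

def render_post(post: dict) -> str:
    """Render one submission as a minimal static HTML fragment."""
    return (
        f"<article><h2>{html.escape(post.get('title', ''))}</h2>"
        f"<p>by {html.escape(post.get('author', '[deleted]'))} "
        f"in r/{html.escape(post.get('subreddit', ''))}</p></article>"
    )

def dump_to_html(path: str) -> str:
    """Stream a compressed NDJSON dump and build a single index page.

    For real Pushshift .zst files, replace gzip.open with a
    zstandard stream reader (max_window_size=2**31).
    """
    parts = ["<!doctype html><meta charset='utf-8'><main>"]
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                parts.append(render_post(json.loads(line)))
    parts.append("</main>")
    return "\n".join(parts)

# Demo: write a tiny fake dump, then render it.
with gzip.open("demo.ndjson.gz", "wt", encoding="utf-8") as fh:
    fh.write(json.dumps(
        {"title": "Hello", "author": "alice", "subreddit": "test"}) + "\n")

page = dump_to_html("demo.ndjson.gz")
```

Streaming line by line is what keeps memory flat: nothing ever holds the whole dump in RAM.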

API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.

Self-hosting options:

  • USB drive / local folder (just open the HTML files)
  • Home server on your LAN
  • Tor hidden service (2 commands, no port forwarding needed)
  • VPS with HTTPS
  • GitHub Pages for small archives
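For the Tor option, an onion service really only needs two torrc lines pointing at whatever local port serves the archive (the directory path and port 8080 below are assumptions, not the project's documented values):

```
# /etc/tor/torrc -- expose a local web server as an onion service
HiddenServiceDir /var/lib/tor/redd-archiver/
HiddenServicePort 80 127.0.0.1:8080
```

Restart tor and read the generated `hostname` file in the HiddenServiceDir to get the .onion address; no inbound ports are ever opened.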

Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.

Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
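The "multiple instances by topic" idea could look something like this in Compose form. This is purely illustrative, not the project's actual compose file; the image name, `SUBREDDITS` variable, and ports are all hypothetical:

```yaml
# Illustrative only -- one instance per topic keeps each
# Postgres database small and memory use bounded.
services:
  archive-tech:
    image: redd-archiver          # hypothetical image name
    environment:
      SUBREDDITS: "linux,selfhosted,networking"   # hypothetical variable
    ports: ["8081:8080"]
  archive-science:
    image: redd-archiver
    environment:
      SUBREDDITS: "science,askscience"
    ports: ["8082:8080"]
```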

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is “trust but verify” – it accelerates the boring parts but you still own the architecture.

Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://github.com/19-84/redd-archiver (Public Domain)

Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4

    • 19-84@lemmy.dbzer0.com (OP) · 17 minutes ago

      2005-06 to 2024-12

      However, the data from 2025-12 has already been released; it just needs to be split and reprocessed for 2025 by watchful1. Once that happens you can host an archive up to the end of 2025. I will probably add support for importing data from the Arctic Shift dumps instead, so that archives can be updated monthly.

  • a1studmuffin@aussie.zone · 2 hours ago

    This seems especially handy for anyone who wants a snapshot of Reddit from the pre-enshittification, pre-AI era, when content was more authentic and less driven by bots and commercial manipulation of opinion. Just choose the cutoff date you want and stick with that dataset.

    • irmadlad@lemmy.world · edited · 49 minutes ago

      Maybe read where OP says ‘Yes I used AI, English is not my first language.’ Furthermore, are ethnic slurs really necessary here?

  • breakingcups@lemmy.world · 7 hours ago

    Just so you’re aware, it is very noticeable that you also used AI to help write this post and its use of language can throw a lot of people off.

    Not to detract from your project, which looks cool!

      • Melvin_Ferd@lemmy.world · 20 seconds ago

        You’re awesome. AI is fun and there’s nothing wrong with using it, especially how you did. Lemmy was hit hard with AI hate propaganda. China probably trying to stop its growth and development in other countries or some stupid shit like that. But you’re good. Fuck them

  • 19-84@lemmy.dbzer0.com (OP) · 6 hours ago

    PLEASE SHARE ON REDDIT!!! I have never had a reddit account and they will NOT let me post about this!!

  • SteveCC@lemmy.world · 7 hours ago

    Wow, great idea. So much useful information and discussion that users have contributed. Looking forward to checking this out.

  • Tanis Nikana@lemmy.world · 7 hours ago

    Reddit is hot stinky garbage but can be useful for stuff like technical support and home maintenance.

    Voat and Ruqqus are straight-up misinformation and fascist propaganda, and if you excise them from your data set, your data will dramatically improve.

  • frongt@lemmy.zip · 7 hours ago

    And only a 3.28 TB database? Oh, because it’s compressed. Includes comments too, though.

    • 19-84@lemmy.dbzer0.com (OP) · 5 hours ago

      redarc uses ReactJS to serve its web app; redd-archiver uses a hybrid architecture that combines static page generation with Postgres search via Flask. It is more like a hybrid static site generator with web-app capabilities through Docker and Flask. The static pages, with sorted indexes, can be viewed offline and served on hosts like GitHub and Codeberg Pages.
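The hybrid pattern described here can be sketched in a few lines: static pages are plain files that need no backend at all, and only the search route touches a database. This sketch uses sqlite3 as a stand-in for the project's Postgres backend, and the schema and `LIKE`-based matching are illustrative, not the tool's actual implementation (which uses Postgres full-text search):

```python
import sqlite3

# Stand-in for the search backend: static HTML is served as plain
# files; only search queries ever hit the database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO posts (title, body) VALUES (?, ?)",
    [("Self-hosting guide", "How to run services on a LAN"),
     ("Baking bread", "Sourdough starter tips")],
)

def search(query: str) -> list:
    """Return (id, title) rows matching the query in title or body.

    LIKE keeps the sketch dependency-free; the real tool uses
    Postgres full-text search instead.
    """
    like = f"%{query}%"
    return conn.execute(
        "SELECT id, title FROM posts WHERE title LIKE ? OR body LIKE ?",
        (like, like),
    ).fetchall()

results = search("LAN")
```

The split is what makes the offline story work: delete the database and the static archive still browses fine, you just lose search.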

  • Howlinghowler110th@kbin.earth · 7 hours ago

    I think this is a good use case for AI, and I'm impressed with it. I wish the instructions were clearer on how to set it up, though.

    • Gerudo@lemmy.zip · 2 hours ago

      Say what you will about Reddit, but there is tons of information on that platform that’s not available anywhere else.

      • UnderpantsWeevil@lemmy.world · 2 hours ago

        :-/

        You can definitely mine a bit of gold out of that pile of turds. But you could also go to the library and receive a much higher ratio of signal to noise.

      • irmadlad@lemmy.world · 7 hours ago

        I use Reddit for reference through RedLib. I could see how having an on-premise repository would be helpful. How many subs were scraped in this 3.28 TB backup? Reason for asking: I'd have little interest in, say, news or politics, but there are some good subs that deal with Linux, networking, and selfhosting, plus some old subs I used to help moderate like r/degoogle, r/deAmazon, etc.

        • 19-84@lemmy.dbzer0.com (OP) · 7 hours ago

          The torrent has data for the top 40,000 subs on Reddit. Thanks to watchful1 splitting the data by subreddit, you can download only the subreddits you want from the torrent 🙂