Reddit’s API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28 TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: This doesn’t touch Reddit’s servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.

What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
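
For reference, the Pushshift dumps are zstandard-compressed, newline-delimited JSON, so they can be streamed without ever decompressing to disk. A minimal sketch of reading one in Python (the file name is illustrative, and this is not redd-archiver’s actual ingest code):

    import json
    import zstandard as zstd

    # Pushshift dumps use a long compression window, so the decompressor
    # needs an enlarged max_window_size to read them.
    with open("selfhosted_submissions.zst", "rb") as fh:
        dctx = zstd.ZstdDecompressor(max_window_size=2**31)
        with dctx.stream_reader(fh) as reader:
            buf = b""
            while chunk := reader.read(2**20):   # 1 MiB at a time
                buf += chunk
                *lines, buf = buf.split(b"\n")   # keep any partial last line
                for line in lines:
                    if line:
                        post = json.loads(line)
                        print(post["subreddit"], post["title"])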

API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.

Self-hosting options:

  • USB drive / local folder (just open the HTML files)
  • Home server on your LAN (see the serving sketch after this list)
  • Tor hidden service (2 commands, no port forwarding needed)
  • VPS with HTTPS
  • GitHub Pages for small archives

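On the LAN option: the generated pages need no backend at all, so any static file server works. A minimal sketch using only Python’s standard library (the output directory name is an assumption):

    import functools
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve the generated archive read-only on the local network.
    handler = functools.partial(SimpleHTTPRequestHandler, directory="output")
    HTTPServer(("0.0.0.0", 8000), handler).serve_forever()

Browse to http://<host>:8000 from any machine on the network.
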
Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.

Scale: Tens of millions of posts per instance. The PostgreSQL backend keeps memory use constant regardless of dataset size. For the full 2.38-billion-post dataset, run multiple instances split by topic.
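
The constant-memory behavior is the standard server-side-cursor pattern: rows stream from PostgreSQL in batches instead of being loaded all at once. A sketch of the idea with psycopg2 (the schema is illustrative, not the project’s):

    import psycopg2

    conn = psycopg2.connect("dbname=archive")
    with conn.cursor(name="post_stream") as cur:  # named cursor = server-side
        cur.itersize = 10_000                     # rows fetched per round trip
        cur.execute("SELECT id, title FROM posts ORDER BY created_utc")
        for post_id, title in cur:
            pass  # render one static page per row; memory use stays flat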

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is “trust but verify” – it accelerates the boring parts but you still own the architecture.

Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://github.com/19-84/redd-archiver (Public Domain)

Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4

    • 19-84@lemmy.dbzer0.com (OP) · +4 · 58 minutes ago

      redarc uses reactjs to serve the web app; redd-archiver uses a hybrid architecture that combines static page generation with postgres search via flask. it’s more like a hybrid static site generator with web app capabilities through docker and flask. the static pages with sorted indexes can be viewed offline and served on hosts like github and codeberg pages.
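
      A minimal sketch of that hybrid split, assuming a Flask app that serves the pre-generated pages as static files and exposes a single dynamic search route (the route, schema, and tsvector column are illustrative, not the project’s code):

          import psycopg2
          from flask import Flask, jsonify, request

          # Static pages are served as-is; only /api/search touches postgres.
          app = Flask(__name__, static_folder="output", static_url_path="")

          @app.route("/")
          def index():
              return app.send_static_file("index.html")

          @app.route("/api/search")
          def search():
              q = request.args.get("q", "")
              conn = psycopg2.connect("dbname=archive")  # per-request for brevity
              with conn, conn.cursor() as cur:
                  # assumes a precomputed tsvector column for full-text search
                  cur.execute(
                      "SELECT title FROM posts "
                      "WHERE tsv @@ plainto_tsquery(%s) LIMIT 20",
                      (q,),
                  )
                  rows = [r[0] for r in cur.fetchall()]
              return jsonify(results=rows)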

  • 19-84@lemmy.dbzer0.com (OP) · +6 · 2 hours ago

    PLEASE SHARE ON REDDIT!!! I have never had a reddit account and they will NOT let me post about this!!

  • breakingcups@lemmy.world · +40 / -3 · 4 hours ago

    Just so you’re aware, it is very noticeable that you also used AI to help write this post and its use of language can throw a lot of people off.

    Not to detract from your project, which looks cool!

  • Tanis Nikana@lemmy.world · +17 · 3 hours ago

    Reddit is hot stinky garbage but can be useful for stuff like technical support and home maintenance.

    Voat and Ruqqus are straight-up misinformation and fascist propaganda, and if you excise them from your data set, your data will dramatically improve.

  • SteveCC@lemmy.world · +17 · 3 hours ago

    Wow, great idea. So much useful information and discussion that users have contributed. Looking forward to checking this out.

  • frongt@lemmy.zip · +21 · 4 hours ago

    And only a 3.28 TB database? Oh, because it’s compressed. Includes comments too, though.

      • irmadlad@lemmy.world · +6 · 3 hours ago

        I use Reddit for reference through RedLib. I could see how having an on-premise repository would be helpful. How many subs were scraped in this 3.28 TB backup? Reason for asking: I’d have little interest in, say, News or Politics, but there are some good subs that deal with Linux, networking, selfhosting, and some old subs I used to help moderate like r/degoogle, r/deAmazon, etc.

        • 19-84@lemmy.dbzer0.com (OP) · +10 · 3 hours ago

          the torrent has data for the top 40,000 subs on reddit. thanks to watchful1 splitting the data by subreddit, you can download only the subreddits you want from the torrent 🙂

  • Howlinghowler110th@kbin.earth · +6 / -3 · 3 hours ago

    I think this is a good use case for AI and I’m impressed with it. I wish the instructions were clearer on how to set it up, though.