Reddit’s API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28 TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.
The key point: This doesn’t touch Reddit’s servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.
What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
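For a sense of what’s involved, here is a minimal sketch (not the tool’s actual code) of streaming a Pushshift-style .zst dump, assuming the usual newline-delimited JSON layout and the third-party zstandard package; the filename is made up:

```python
# Minimal sketch, not redd-archiver's actual code. Assumes
# newline-delimited JSON (one object per line) and the third-party
# `zstandard` package; the filename is hypothetical.
import io
import json

import zstandard

def iter_records(path):
    """Yield one dict per line without decompressing the file to disk."""
    with open(path, "rb") as fh:
        # Pushshift dumps use a large zstd window; the default limit
        # rejects them, so raise max_window_size.
        dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
        reader = io.TextIOWrapper(dctx.stream_reader(fh), encoding="utf-8")
        for line in reader:
            yield json.loads(line)

for post in iter_records("linux_submissions.zst"):
    print(post["id"], post.get("title", ""))
```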
API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.
Self-hosting options:
- USB drive / local folder (just open the HTML files)
- Home server on your LAN (see the sketch after this list)
- Tor hidden service (2 commands, no port forwarding needed)
- VPS with HTTPS
- GitHub Pages for small archives
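For the LAN option, because the output is plain static HTML, even Python’s built-in server is enough. A minimal sketch, assuming your generated output sits in a hypothetical archive_output/ folder:

```python
# Minimal sketch of the "home server on your LAN" option: the output is
# plain static HTML, so the standard-library server suffices. The
# archive path is hypothetical; point it at your generated output.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

handler = partial(SimpleHTTPRequestHandler, directory="archive_output")
with ThreadingHTTPServer(("0.0.0.0", 8080), handler) as server:
    print("Serving archive on http://0.0.0.0:8080")
    server.serve_forever()
```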
Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.
Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
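The trick behind the flat memory use is nothing exotic: stream records into PostgreSQL in fixed-size batches, so nothing ever holds the whole dataset in RAM. A rough sketch of the idea (not the project’s actual loader; table and column names are hypothetical, and it pairs naturally with a streaming reader like the one above):

```python
# Sketch of constant-memory ingestion (not the project's actual loader):
# stream records in fixed-size batches so RAM use is independent of
# dataset size. Table/column names are hypothetical; requires psycopg2.
import itertools

import psycopg2
from psycopg2.extras import execute_values

def load(records, dsn, batch_size=10_000):
    records = iter(records)  # ensure a single pass over the stream
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        while True:
            batch = list(itertools.islice(records, batch_size))
            if not batch:
                break
            execute_values(
                cur,
                "INSERT INTO posts (id, subreddit, title) VALUES %s",
                [(r["id"], r["subreddit"], r["title"]) for r in batch],
            )
        conn.commit()
```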
How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is “trust but verify” – it accelerates the boring parts but you still own the architecture.
Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://github.com/19-84/redd-archiver (Public Domain)
Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4
Fuck Reddit and Fuck Spez.
PLEASE SHARE ON REDDIT!!! I have never had a reddit account and they will NOT let me post about this!!
Just so you’re aware, it is very noticeable that you also used AI to help write this post and its use of language can throw a lot of people off.
Not to detract from your project, which looks cool!
Yes I used AI, English is not my first language. Thank you for the kind words!
How does this compare to redarc? It seems to be similar.
redarc uses reactjs to serve the web app, while redd-archiver uses a hybrid architecture that combines static page generation with postgres-backed search via flask. it’s more like a static site generator with web-app capabilities through docker and flask. the static pages with sorted indexes can be viewed offline and served on hosts like github and codeberg pages.
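to make the hybrid idea concrete, a tiny sketch (illustrative only, not redd-archiver’s actual code, and the schema is made up): static html handles browsing, and a small flask route backs search against postgres full-text queries:

```python
# Illustrative sketch of the hybrid design, not redd-archiver's actual
# code: static HTML handles browsing; a small Flask route backs search
# against PostgreSQL full-text queries. The schema is hypothetical.
import psycopg2
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/search")
def search():
    q = request.args.get("q", "")
    with psycopg2.connect("dbname=archive") as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, title FROM posts "
            "WHERE to_tsvector('english', title) @@ plainto_tsquery(%s) "
            "LIMIT 50",
            (q,),
        )
        rows = cur.fetchall()
    return jsonify([{"id": r[0], "title": r[1]} for r in rows])
```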
Reddit is hot stinky garbage but can be useful for stuff like technical support and home maintenance.
Voat and Ruqqus are straight-up misinformation and fascist propaganda, and if you excise them from your data set, your data will dramatically improve.
the great part is that since the pipeline is already built, it is easy to support additional data sources! there is even an issue template for submitting a new data source! https://github.com/19-84/redd-archiver/blob/main/.github/ISSUE_TEMPLATE/submit-data-source.yml
Wow, great idea. So much useful information and discussion that users have contributed. Looking forward to checking this out.
thank you!!! i built on great ideas from others! i cant take all the credit 😋
And only a 3.28 TB database? Oh, because it’s compressed. Includes comments too, though.
Yes! Too many comments to count in a reasonable amount of time!
I would sooner download a tire fire.
Say what you will about Reddit, but there is tons of information on that platform that’s not available anywhere else.
:-/
You can definitely mine a bit of gold out of that pile of turds. But you could also go to the library and receive a much higher ratio of signal to noise.
thanks anyway for looking at my project 🙂
I use Reddit for reference through RedLib. I could see how having an on-premise repository would be helpful. How many subs were scraped in this 3.28 TB backup? Reason for asking: I’d have little interest in, say, News or Politics, but there are some good subs that deal with Linux, networking, selfhosting, and some old subs I used to help moderate like r/degoogle, r/deAmazon, etc.
the torrent has data for the top 40,000 subs on reddit. thanks to watchful1 splitting the data by subreddit, you can download only the subreddit you want from the torrent 🙂
Sweet! I’ll check it out.
I think this is a good use case for AI, and I’m impressed with it. I wish the instructions were clearer on how to set it up, though.
thank you! the instructions are a little overwhelming, check out the quickstart if you haven’t yet! https://github.com/19-84/redd-archiver/blob/main/QUICKSTART.md