Epstein Files Jan 30, 2026

Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete): only contains 49 GB of 180 GB. Multiple reports of a cutoff from the DOJ server at offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merger of (45.63 GB + 86.74 GB) versions

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

Epstein Files Data Set 10 (78.64GB)

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)
  • INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
  • INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)

Epstein Files Data Set 11 (25.55GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
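The published digests above can be checked locally before seeding. A minimal sketch using only the standard library (the path argument is whatever filename you saved the download under):

```python
import hashlib

def file_digests(path, chunk_size=1 << 20):
    """Compute SHA1, MD5, and SHA256 of a file in one streaming pass."""
    hashes = {name: hashlib.new(name) for name in ("sha1", "md5", "sha256")}
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            for h in hashes.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashes.items()}

# Digests published above for Data Set 12:
EXPECTED = {
    "sha1": "20f804ab55687c957fd249cd0d417d5fe7438281",
    "md5": "b1206186332bb1af021e86d68468f9fe",
    "sha256": "b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2",
}

def verify(path):
    """Return {algorithm: matched?} for the downloaded Data Set 12 file."""
    actual = file_digests(path)
    return {name: actual[name] == digest for name, digest in EXPECTED.items()}
```

A single streaming pass keeps memory flat even on multi-gigabyte archives.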


This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)


EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.

Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.

I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.

Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files

  • Dhoard@lemmy.world · 5 hours ago

    Theoretically speaking, if a website has the archives, what is stopping people from downloading each file on a page-by-page basis from the archive?

    Edit: Never mind. I saw a full list of URLs that the archive managed to save, and it is missing a lot.

    • DigitalForensick@lemmy.world · 5 hours ago

      Nothing, but even the archived pages aren't 100% complete, because some of the files were "faked" in the paginated file lists on the DOJ site. It does work well enough, though. I did this to recover all the court records and FOIA files.
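
For anyone wanting to repeat that approach: the Wayback Machine's CDX API can list everything it captured under a URL prefix. The endpoint and parameters below are real CDX API features; the DOJ prefix you would pass in is up to you. A sketch:

```python
import json
import urllib.parse
import urllib.request

CDX_API = "https://web.archive.org/cdx/search/cdx"

def cdx_query(url_prefix):
    """Build a CDX API query for every capture under a URL prefix,
    deduplicated by content digest."""
    params = urllib.parse.urlencode({
        "url": url_prefix,
        "matchType": "prefix",
        "output": "json",
        "collapse": "digest",
        "fl": "timestamp,original",
    })
    return f"{CDX_API}?{params}"

def snapshot_url(timestamp, original):
    """Replayable snapshot URL for one CDX result row."""
    return f"https://web.archive.org/web/{timestamp}/{original}"

def archived_urls(url_prefix):
    """Fetch the CDX listing and return replayable snapshot URLs.
    The first row of the JSON response is a header row."""
    with urllib.request.urlopen(cdx_query(url_prefix)) as resp:
        rows = json.load(resp)
    return [snapshot_url(ts, orig) for ts, orig in rows[1:]]
```

As the comment above notes, the listing will only contain what the crawler actually saved, so expect gaps.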

  • DigitalForensick@lemmy.world · 5 hours ago

    For anyone looking into doing some OSINT work, EFTA00809187 is an epic file.

    It contains lists of ALL known JE emails, usernames, websites, social media accounts, etc. from that time.

  • ArzymKoteyko@lemmy.world · 11 hours ago

    Hi everyone, maybe I'm a bit late to this, but I wanted to share my findings. I parsed every page up to 40k in DS9 three times, and the results matched PeoplesElbow's findings by distribution (no content after page 14k and a lot of duplication), BUT I parsed 4x more unique URLs: 246,079 (still 2x short of the official size). A strange thing: on the second pass (one day after the first), I started receiving new URLs on old pages.

    Here is stat by file type:

     count  | file type 
    --------+------
          1 | ts
          8 | mov
        236 | mp4
     244326 | pdf
         73 | m4a
          1 | vob
          1 | docx
          1 | doc
          9 | m4v
       1422 | avi
          1 | wmv
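
A tally like the one above can be reproduced from any scraped URL list with a few stdlib lines; a sketch (the input URLs here are placeholders, not real DOJ paths):

```python
import os
from collections import Counter
from urllib.parse import urlparse

def extension_counts(urls):
    """Tally file extensions across scraped URLs, ignoring query
    strings and case, the way the per-type table was produced."""
    counts = Counter()
    for url in urls:
        path = urlparse(url).path          # strips ?query fragments
        ext = os.path.splitext(path)[1].lstrip(".").lower()
        if ext:                            # skip directory-style URLs
            counts[ext] += 1
    return counts
```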
    
    • DigitalForensick@lemmy.world · 5 hours ago

      Nice work man! I also discovered something yesterday that I think is worth pointing out.

      DUPLICATE FILES: Within the datasets, there are often emails, doc scans, etc. that are duplicate entries. (I'm not talking about multi-torrent stitching, but actual duplicate documents within the raw dataset.) **These duplicates must be preserved.** When looking at two copies of the same duplicate file, I found that sometimes the redactions are in different places! This can be used to extract more info later down the road.
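
One way to start surfacing those groups is to hash everything: byte-identical copies collapse into one group, while copies of the same document with *different* redactions will not hash equal, so the near-misses are the interesting ones to diff by eye. A sketch (exact-hash grouping only, not fuzzy matching):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def group_duplicates(root):
    """Group PDFs under `root` by SHA-256 of their bytes and return
    only groups with more than one member (exact duplicates)."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*.pdf"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path)
    return {d: ps for d, ps in groups.items() if len(ps) > 1}
```

Differently-redacted copies would need a second pass (e.g. comparing extracted text or page counts), since their bytes differ by design.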

  • Xenom0rph@lemmy.world · 17 hours ago

    I’m still seeding the partial Dataset 9 (45.63GB and 89.54GB) and all the other datasets. Is there a newer dataset 9 available?

    • acelee1012@lemmy.world · 1 day ago

      I have never made a torrent file before, so feel free to correct me if it doesn't work. Here is the magnet link for this as a torrent file, so it's up for more than an hour: magnet:?xt=urn:btih:694535d1e3879e899a53647769f1975276723db7&xt=urn:btmh:12207cf818f0f0110ca5e44614f2c65e016eca2fe7bc569810f9fb25e80ff608fc9b&dn=DOJ%20Epstein%20file%20urls.txt&xl=81991719&tr=udp%3A%2F%2Ftracker.opentrackr.org%3A1337%2Fannounce

  • acelee1012@lemmy.world · 2 days ago

    Has anyone made a Dataset 9 and 10 torrent file without the files in it that the NYT reported as potentially CSAM?

    • locke1@lemmy.world · 20 hours ago

      I don’t think anyone knows for sure which files those are. It would’ve been helpful if the NYT had published the file names. But maybe the NYT isn’t sure themselves, as they wrote that some of the images are “possibly” of teenagers.

      To be on the safe side, I guess you could just remove all nude images from the dataset. It is a ton of images to go through, though: hundreds of thousands.

  • activeinvestigator@lemmy.world · 2 days ago

    Do people here have the partial dataset 9? Or are you all missing the entire set? There is a magnet link floating around for ~100GB of it, the one removed in the OP.

    I am trying to figure out exactly how many files dataset 9 is supposed to have. Before the zip file went dark, I was able to download about 2GB of it. This was today, so it may not be the original zip file from Jan 30th. At the head of the zip file is an index file, VOL00009.OPT; you don’t need the full download in order to read it. The index says there are 531,307 PDFs; the 100GB torrent has 531,256, so it’s missing 51 PDFs. I checked the 51 file names, and they no longer exist as individual files on the DOJ website either. I’m assuming these are the CSAM.

    Note that the 3M number of released documents != 3M PDFs; each PDF page is counted as a “document”. Dataset 9 contains 1,223,757 documents, and according to the index, we are missing only 51 documents; they are not multipage. In total, I have 2,731,789 documents from datasets 1-12, short of the 3M number. The index I got also was not missing any document ranges.

    It’s curious that the zip file had an extra 80GB when only 51 documents are missing. I’m currently scraping links from the DOJ webpage to double-check the filenames.
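
For anyone repeating this check: Opticon-style .OPT load files are plain comma-separated text, conventionally one line per imaged page, with a "Y" marking the first page of each document. A sketch under that assumption (the real VOL00009.OPT field order may differ, so treat the indices here as guesses):

```python
def opt_stats(opt_lines):
    """Tally pages, unique image files, and document breaks from an
    Opticon (.OPT) load file. Assumed per-line layout:
    Bates,Volume,Path,DocBreak,Box,Folder,PageCount"""
    pages = 0
    files = set()
    docs = 0
    for line in opt_lines:
        line = line.strip()
        if not line:
            continue
        fields = line.split(",")
        pages += 1                # one OPT line per imaged page
        files.add(fields[2])      # path to the PDF/TIFF holding that page
        if len(fields) > 3 and fields[3].upper() == "Y":
            docs += 1             # "Y" marks the first page of a document
    return {"pages": pages, "unique_files": len(files), "documents": docs}
```

Comparing `unique_files` from the index against the file count in the 100GB torrent is exactly the 531,307 vs. 531,256 arithmetic above.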

    • Arthas@lemmy.world · 2 days ago

      I used AI to analyze the ~36 GB I was able to download before they erased the zip file from the server.

      Complete Volume Analysis
      
        Based on the OPT metadata file, here's what VOL00009 was supposed to contain:
      
        Full Volume Specifications
      
        - Total Bates-numbered pages: 1,223,757 pages
        - Total unique PDF files: 531,307 individual PDFs
        - Bates number range: EFTA00039025 to EFTA01262781
        - Subdirectory structure: IMAGES\0001\ through IMAGES\0532\ (532 folders)
        - Expected size: ~180 GB (based on your download info)
      
        What You Actually Got
      
        - PDF files received: 90,982 files
        - Subdirectories: 91 folders (0001 through ~0091)
        - Current size: 37 GB
        - Percentage received: ~17% of the files (91 out of 532 folders)
      
        The Math
      
        Expected:  531,307 PDF files / 180 GB / 532 folders
        Received:   90,982 PDF files /  37 GB /  91 folders
        Missing:   440,325 PDF files / 143 GB / 441 folders
      
         Insight:
        You got approximately the first 17% of the volume before the server deleted it. The good news is that the DAT/OPT index files are complete, so you have a full manifest of what should be there. This means:
        - You know exactly which documents are missing (folders 0092-0532)
      

      I haven’t yet looked into downloading the partials from archive.org to see whether I have any useful files from dataset 9 that archive.org doesn’t have.

    • Wild_Cow_5769@lemmy.world · 2 days ago

      That’s pretty cool…

      Can you send me a DM of the 51? If I come across one and it isn’t some sketchy porn, I’ll let you know.

  • DigitalForensick@lemmy.world · 2 days ago

    While I feel hopeful that we will be able to reconstruct the archive and create some sort of baseline that can be put back out there, I also can’t stop thinking about the “and then what” aspect here. We’ve seen our elected officials do nothing with this info over and over again, and I’m worried this is going to repeat itself.

    I’m fully open to input on this, but I think having a group path forward is useful here. These are the things I believe we can do to move the needle.

    Right Now:

    1. Create a clean Data Archive for each of the known datasets (01-12). Something that is actually organized and accessible.
    2. Create a working Archive Directory containing an “itemized” reference list (SQL DB?) of the full Data Archive, with each document listed as a row with certain metadata. Imagine a GitHub repo that we can all contribute to as we work. – File number – Dir. location – File type (image, legal record, flight log, email, video, etc.) – File status (Redacted bool, Missing bool, Flagged bool)
    3. Infill any MISSING records where possible.
    4. Extract images out of the .pdf format, break out the “multi-file” PDFs, renaming images/docs by file number. (I made a quick script that does this reliably well.)
    5. Determine which files were left in as CSAM and “redact” them ourselves, removing any liability on our part.

    What’s Next: Once we have the Archive and Archive Directory. We can begin safely and confidently walking through the Directory as a group effort and fill in as many files/blanks as possible.

    1. Identify and de-redact all documents with garbage redactions (remember the copy/paste DOJ blunders from December), and identify poorly positioned redaction bars to uncover obfuscated names.
    2. LABELING! If we could start adding labels to each document in the form of tags that contain individuals, emails, locations, businesses, it would make it MUCH easier for people to “connect the dots”.
    3. Event timeline… This will be hard, but if we can apply a timeline ID to each document, we can put the archive in order of events.
    4. Create some method for visualizing the timeline, searching, or making connections with labels.

    We may not be detectives, legislators, or lawmen, but we are sleuth nerds, and the best thing we can do is get this data into a place that allows others to push for justice and put an end to this crap once and for all. It’s lofty, I know, but enough is enough. …Thoughts?

    • ATroubledMaker@lemmy.world · 22 hours ago

      So I know how to do a lot of this, and I can bring something significant insofar as an understanding of both the gravity and the volume of things here. Looking at the way everything that has been released has been organized: well, it isn’t. This is not how an evidence production should ever look.

      There is a way to best organize this and to do so how it would be expected for the presentation of a catalog of digital evidence. I’m aware of this because I’ve done it for years.

      But almost if not maybe even more important is that while there are monsters still hidden in these documents, whether released or still held back, there is something else to consider.

      Those who are involved and know who the monsters are and can never forget them. Ever.

      I took an interest in this specifically because I felt a moral obligation as someone who has been personally affected in this way just not by these specific monsters. However what I do know is the very structure that allows them to roam free, unscathed, even able to sleep at night. What failed to protect those who were harmed also failed me and when I do sleep it is the nightmare that also can never be forgotten.

      This resulted in learning how to spot their fuck-ups, because I knew what they were and had no reason to trust that it would fix itself. With that said, the insight of someone who understands this through unfortunate lived experience provides something that cannot be learned, and something I hope others will never be forced to learn.

      I have messaged a few people. One responded. Just trust me when I say that if you are going to work collaboratively, have someone who understands the pain you are just going to be reading.

      I will help where it’s needed and it’s needed.

    • Wild_Cow_5769@lemmy.world · 2 days ago

      GFD….

      My 2 cents. As a father of only daughters…

      If we don’t weed out this sick behavior as a society we never will.

      My thoughts are enough is enough.

      Once the files are gone there is little to zero chance they are ever public again….

      You expect me to believe that an “oh shit, we messed up” was an accident?

      It’s the perfect excuse… so no one looks at the files.

      That’s my 2 cents.

      • DigitalForensick@lemmy.world · 1 day ago

        I’ve been thinking a lot about this whole thing. I don’t want to be worried or fearful here - we have done nothing wrong! Anything we have archived was provided to us directly by them in the first place. There are whispers all over the internet, random torrents being passed around, conspiracies, etc., but what are we actually doing other than freaking ourselves out (myself at least) and going viral with an endless stream of “OMG LOOK AT THIS FILE” videos/posts?

        I vote to remove any of the ‘concerning’ files and backfill with blank placeholder PDFs with justification, then collect everything we have so far, create file hashes, and put out a clean + stable archive of everything: a safe, indexed archive. We wipe away any concerns and can proceed methodically through the blood trail of documents, resulting in an obvious and accessible collection of evidence. From there we can actually start organizing to create a tool that can be used to crowdsource tagging, timestamping, and parsing the data. I’m a developer and am happy to offer my skillset.
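
The “create file hashes” step could be as simple as a sorted manifest of one `sha256  relative/path` line per file, so any mirror of the clean archive can be verified byte-for-byte. A sketch (the manifest format here is my suggestion, similar in spirit to `sha256sum` output):

```python
import hashlib
from pathlib import Path

def write_manifest(root, out_path):
    """Write one 'sha256  relative/path' line per file under `root`,
    sorted by path, and return the number of files covered."""
    root = Path(root)
    lines = []
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        lines.append(f"{digest}  {path.relative_to(root).as_posix()}")
    Path(out_path).write_text("\n".join(lines) + "\n")
    return len(lines)
```

Writing the manifest outside the archive root keeps the manifest itself from being swept into a later pass.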

        Taking a step back - it’s fun to do the “digital sleuth” thing for a while, but then what? We have the files… (mostly)… Great. We all have our own lives, jobs, and families, and taking actual time to dig into this and produce a real solution that can actually make a difference is a pretty big ask. That said, this feels like a moment where we finally can make an actual difference, and I think it’s worth committing to. If any of you are interested in helping beyond archival, please let me know.

        I just downloaded Matrix, but I’m new to this, so I’m not sure how it all works. Happy to link up via Discord, Matrix, email, or whatever.

    • PeoplesElbow@lemmy.world · 2 days ago

      We definitely need a crowdsourced method for going through all the files. I am currently building a solo Cytoscape tool to try making an affiliation graph. Expanding this into a tool for a community, with authorization so only whitelisted individuals can work on it, is beyond my scope; I can’t volunteer to build such an important tool alone, but I am happy to offer my help building it. I can convert my existing tool to a prototype if anyone wants to collaborate with me on it. I am an amateur, but I will spend all the Cursor credits on this.
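
For what it’s worth, the core of an affiliation graph is small: from per-document name tags, emit one node per name and one weighted edge per co-mention, in the elements shape Cytoscape.js consumes. A sketch (the `{document_id: [names]}` input format is my assumption, not an existing tool’s):

```python
from itertools import combinations

def affiliation_elements(doc_tags):
    """Turn {document_id: [names mentioned]} into a Cytoscape.js-style
    elements list: one node per name, one weighted edge per pair of
    names that co-occur in a document."""
    nodes = set()
    edges = {}
    for doc, names in doc_tags.items():
        nodes.update(names)
        for a, b in combinations(sorted(set(names)), 2):
            edges[(a, b)] = edges.get((a, b), 0) + 1   # weight = co-mentions
    return (
        [{"data": {"id": n}} for n in sorted(nodes)]
        + [{"data": {"source": a, "target": b, "weight": w}}
           for (a, b), w in sorted(edges.items())]
    )
```

The output can be dumped as JSON and fed straight into a Cytoscape.js `elements` array for visualization.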

  • Wild_Cow_5769@lemmy.world · 2 days ago

    Message me at @wild_cow_5769:matrix.org if someone has a group working on finding the dataset.

    There are billions of people on earth. Someone downloaded dataset 9 before the link was taken down. We just have to find them :)

      • Wild_Cow_5769@lemmy.world · 2 days ago

        This entire thing smells funny. Even OP turned ghost at the threat of suspect images that no one has seen…

        Ask yourself: how did the Times, or whoever came up with this narrative, even find these “suspect” images in a few hours, when it seems no one in the world could even download the zip…

        • kutt@lemmy.world · 2 days ago

          A person made a website just to host links and thumbnails for a better interface to the videos on the DoJ website.

          They deleted everything including their account the same day.

          Everyone. I know website is showing all blank. This is unfortunately the end of my little project. Due to certain circumstances, I had to take it down. Thank you everyone for supporting me and my effort.

          Edit: Link

  • TavernerAqua@lemmy.world · 2 days ago

    In regard to Dataset 9, it’s currently being shared on Dread (forum).

    I have no idea if it’s legit or not, and I don’t care to find out after reading about what’s in it from the NYT.