I need to scan very large JSONL files efficiently and am considering a parallel grep-style approach over line-delimited text.

Would love to hear how you would design it.

  • entwine@programming.dev
    21 hours ago

    You don’t actually need to “split” anything — each thread just reads from its own offset range. mmap might be the most efficient way to do this (or at least the easiest).
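    A minimal sketch of the offset idea, assuming Python with only the stdlib: mmap the file once, compute per-worker byte ranges snapped forward to the next newline (so no JSONL line is split), then let each worker do a grep-style byte search over its range. Function and parameter names here are illustrative, not from any particular tool.

```python
import mmap
import os
from concurrent.futures import ThreadPoolExecutor

def chunk_offsets(mm, n_chunks):
    """Split the mapped file into ~equal ranges, snapping each internal
    boundary forward to just past the next newline so lines stay whole."""
    size = len(mm)
    bounds = [0]
    for i in range(1, n_chunks):
        pos = size * i // n_chunks
        nl = mm.find(b"\n", pos)
        bounds.append(size if nl == -1 else nl + 1)
    bounds.append(size)
    # drop empty/duplicate ranges that appear when the file is tiny
    return [(bounds[i], bounds[i + 1])
            for i in range(n_chunks) if bounds[i] < bounds[i + 1]]

def count_matches(mm, start, end, needle):
    """Grep-style scan: count lines within [start, end) containing needle."""
    count = 0
    pos = start
    while pos < end:
        nl = mm.find(b"\n", pos, end)
        line_end = end if nl == -1 else nl  # last line may lack a newline
        if mm.find(needle, pos, line_end) != -1:
            count += 1
        pos = line_end + 1
    return count

def parallel_grep(path, needle, workers=None):
    """Count lines containing `needle`, scanning disjoint offset ranges
    in parallel over a single shared mmap (no copying, no splitting)."""
    workers = workers or os.cpu_count() or 4
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            ranges = chunk_offsets(mm, workers)
            with ThreadPoolExecutor(max_workers=workers) as ex:
                return sum(ex.map(
                    lambda r: count_matches(mm, r[0], r[1], needle), ranges))
```

    Threads are fine here because the byte searches run in C and release the GIL; for CPU-heavy per-line work you'd swap in processes, at which point each worker should open and mmap the file itself rather than share the handle.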

    Whether or not that runs into hardware bottlenecks is a separate issue from designing the parallel algorithm. Idk what OP is trying to accomplish, but if the hardware is known (e.g. this is an internal tool meant to run in a data center), they’ll need to read up on their hardware and virtualization architecture to squeeze out the most I/O performance.

    But if parsing is actually the bottleneck, there’s a lot you can do to optimize it in software; simdjson would be a good place to start.