• FishFace@piefed.social

    We have a gigantic monorepo at work.

    To manage the complexity we have entire teams dedicated to aspects of it, and an insanely complex build system that uses remote builders and caches. Even with all that caching, a change to a single Python file takes about fifteen seconds while the build system works out that it has nothing to do, and the cache can be invalidated unexpectedly, turning those fifteen seconds into twenty minutes. Ordinary language features in IDEs are routinely broken, I assume because of the difficulty of maintaining an index over such a huge codebase. Ordinary tools like grep -R or find can only be used with care.
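
    Roughly how these systems decide there is “no work to do”: every build step gets a cache key derived from a hash of its command plus all of its input files, and a hit means the cached output is reused. A minimal Python sketch of the idea (the function names and cache layout are made up, not any real build system’s internals):

        import hashlib
        from pathlib import Path

        CACHE_DIR = Path(".build-cache")  # hypothetical local cache directory

        def cache_key(source_files: list[Path], command: str) -> str:
            """Hash the command plus the content of every input file.
            If none of these changed, the key is identical and the
            cached output can be reused -- no rebuild needed."""
            h = hashlib.sha256()
            h.update(command.encode())
            for f in sorted(source_files):
                h.update(str(f).encode())
                h.update(f.read_bytes())
            return h.hexdigest()

        def run_step(source_files: list[Path], command: str) -> str:
            key = cache_key(source_files, command)
            hit = CACHE_DIR / key
            if hit.exists():
                # Cache hit: the "fifteen seconds of no work" is spent
                # here, hashing inputs and checking caches (often remote).
                return hit.read_text()
            # Cache miss: actually build, then store the result under the key.
            result = f"built with: {command}"  # stand-in for the real build
            CACHE_DIR.mkdir(exist_ok=True)
            hit.write_text(result)
            return result

    An unexpectedly changed input anywhere in that hash chain is exactly what invalidates the cache and turns fifteen seconds into twenty minutes.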

  • Serdalis@lemmy.world

    They are simpler, but they do not scale. Eventually it’s better to create an internal package repo to share common code, which makes rolling updates much easier than a monorepo does.
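
    The rolling-update part works because each consuming repo pins its own version range of the shared package and upgrades on its own schedule. A minimal illustration using the packaging library (the package constraints and service names are made up):

        # pip install packaging
        from packaging.specifiers import SpecifierSet
        from packaging.version import Version

        # Each consuming repo declares its own constraint on the shared
        # internal package, e.g. in its requirements file.
        service_a = SpecifierSet(">=2.0,<3.0")  # already migrated to the 2.x API
        service_b = SpecifierSet(">=1.4,<2.0")  # still on 1.x, upgrades later

        release = Version("2.1.0")
        print(release in service_a)  # True  -- picks up the new release
        print(release in service_b)  # False -- keeps resolving to latest 1.x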

    Smaller repos also put less strain on monitoring and deployment tooling, and they make granular reporting easier, which you will eventually need in large projects.

    Simple for small code bases, a pain and a big code smell for large ones.

    • majster@lemmy.zip

      Agree with this explanation. Also, in a monorepo it’s much easier to reference code between modules, and I think this leads to overly coupled code. It takes more discipline to limit the scope of modules.
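
      That discipline can be automated. A rough sketch of an import-boundary check that fails CI when a module reaches into one it isn’t allowed to depend on (the module names and allowlist are hypothetical; tools like import-linter do this properly):

          import ast
          from pathlib import Path

          # Hypothetical allowlist: which sibling modules each module may import.
          ALLOWED = {
              "core": set(),
              "billing": {"core"},
              "orders": {"core", "billing"},
          }

          def violations(path: Path, module: str) -> list[str]:
              """Return imports in this file that cross a forbidden boundary."""
              tree = ast.parse(path.read_text())
              bad = []
              for node in ast.walk(tree):
                  if isinstance(node, ast.Import):
                      names = [alias.name for alias in node.names]
                  elif isinstance(node, ast.ImportFrom) and node.module:
                      names = [node.module]
                  else:
                      continue
                  for name in names:
                      top = name.split(".")[0]
                      if top in ALLOWED and top != module and top not in ALLOWED[module]:
                          bad.append(f"{path}: {module} -> {name}")
              return bad

          # Run over every module's .py files in CI and fail on any output.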

  • wewbull@feddit.uk

    The problem is PRs / CI tooling. They treat a repo as an independent project, but:

    • A change needs to be able to span multiple repos
    • …or you need a way of expressing dependencies between changes in multiple repos
    • …and you need to be able to block changes to dependencies that should be non-breaking but aren’t.

    Zuul CI solved a lot of these kinds of problems for the OpenStack project, but I’ve personally found it a bitch to set up. Lots of good ideas in it, though.
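
    For the flavour of it: Zuul lets a commit in one repo declare a dependency on a change in another repo with a Depends-On footer in the commit message, and CI then tests the changes together. A rough sketch of reading such footers (the parsing here is my own simplification, not Zuul’s actual code):

        import re

        DEPENDS_ON = re.compile(r"^Depends-On:\s*(\S+)\s*$", re.MULTILINE)

        def depends_on(commit_message: str) -> list[str]:
            """Extract Depends-On footers, each pointing at a change
            in another repo that must be tested alongside this one."""
            return DEPENDS_ON.findall(commit_message)

        msg = "Add new auth endpoint\n\nDepends-On: https://review.example.org/c/client-lib/+/12345\n"
        print(depends_on(msg))
        # ['https://review.example.org/c/client-lib/+/12345']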

  • Ephera@lemmy.ml

    The thing for me is always that, yeah, a breaking change to an internal library needs a huge commit in a monorepo, but you will still have to do the same work in a polyrepo eventually, too.

    Especially since “eventually” really means “ASAP” here. Without going through the breaking change, you can’t benefit from non-breaking changes either, and the complexity of your codebase grows the longer you defer the upgrade, because different parts of your application end up with different behavior. So even in a polyrepo, you ideally upgrade all library consumers right away, just as you’re forced to in a monorepo.
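
    One common way to soften that in either setup is to ship the breaking change behind a deprecation shim, so consumers can migrate one at a time instead of in one huge commit. A minimal sketch (the function names are invented for illustration):

        import warnings

        def fetch_user_v2(user_id: int, *, include_profile: bool = False) -> dict:
            """New API with the breaking signature change."""
            return {"id": user_id, "profile": {} if include_profile else None}

        def fetch_user(user_id: int) -> dict:
            """Old entry point, kept as a thin wrapper so not every
            consumer has to migrate in the same commit. Deleted once
            all callers have moved over."""
            warnings.warn(
                "fetch_user() is deprecated; use fetch_user_v2()",
                DeprecationWarning,
                stacklevel=2,
            )
            return fetch_user_v2(user_id)

    The wrapper stays only until the last consumer upgrades, which keeps the window where different parts of the application behave differently as short and as visible as possible.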