• P03 Locke@lemmy.dbzer0.com · 4 days ago

    ComfyUI is just a bunch of Python code tied into I/O nodes. I’m surprised there isn’t a good set of nodes for SGLang yet.
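
    For what it's worth, a ComfyUI custom node is just a Python class with a few class attributes. A minimal sketch of one that calls an SGLang server through its OpenAI-compatible HTTP endpoint might look like this (the port, node name, and category are assumptions, not an existing node pack):

```python
# Hypothetical ComfyUI custom node that sends a prompt to an SGLang server
# through its OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request


class SGLangTextGen:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "prompt": ("STRING", {"multiline": True}),
                "endpoint": ("STRING", {"default": "http://127.0.0.1:30000/v1/chat/completions"}),
                "max_tokens": ("INT", {"default": 256, "min": 1, "max": 8192}),
            }
        }

    RETURN_TYPES = ("STRING",)
    FUNCTION = "generate"
    CATEGORY = "text/llm"  # arbitrary category name

    def generate(self, prompt, endpoint, max_tokens):
        payload = json.dumps({
            "model": "default",  # SGLang serves whatever model it was launched with
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
        }).encode()
        req = urllib.request.Request(
            endpoint, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            out = json.loads(resp.read())
        return (out["choices"][0]["message"]["content"],)


NODE_CLASS_MAPPINGS = {"SGLangTextGen": SGLangTextGen}
```

    Dropped into ComfyUI's custom_nodes folder, the NODE_CLASS_MAPPINGS entry is what makes the node show up in the graph.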

    • brucethemoose@lemmy.world · 4 days ago (edited)

      SGLang is partly a scripting language for prompt building that leverages its caching and logprobs output, for things like filling in fields or branching over choices, so that kind of work is probably best done in SGLang itself. It also requires pretty beefy hardware for the model size (as opposed to backends like exllama or llama.cpp, which focus more on tight quantization and unbatched performance), so I suppose there's not a lot of interest from local tinkerers?
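
      To give a flavor of that, the SGLang frontend looks roughly like this: a decorated Python function mixing fixed text, generated fields, and constrained choices, all reusing the server's prefix cache (the endpoint and field names here are illustrative):

```python
# Rough sketch of SGLang's frontend DSL: fixed text, generated fields, and a
# constrained choice, reusing the server's prefix cache across fields.
# The endpoint URL and field names are made up for illustration.
import sglang as sgl


@sgl.function
def character_sheet(s, name):
    s += sgl.system("You fill out RPG character sheets tersely.")
    s += sgl.user(f"Create a character sheet for {name}.")
    s += sgl.assistant(
        "Class: " + sgl.gen("cls", max_tokens=8, stop="\n")
        + "\nAlignment: " + sgl.gen("alignment", choices=["good", "neutral", "evil"])
        + "\nBackstory: " + sgl.gen("backstory", max_tokens=128)
    )


sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))
state = character_sheet.run(name="Brucette")
print(state["alignment"], state["backstory"])
```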

      It would be cool, I guess, but ComfyUI does feel more geared toward diffusion. Image/video generation is more multi-model and benefits from dynamically loading/unloading/swapping all sorts of little submodels, LoRAs, and masks, applying them and piping them into each other.

      Running LLMs is more monolithic: you have the one big model, maybe a text-embeddings model as part of the same server, and everything else is just string processing to build the prompts, which you do linearly with Python or whatever. Stuff like CFG and purpose-built LoRAs does exist, but isn't used much.
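
      That linear flow really is just a few string operations followed by one call to the big model, something like this (endpoint, port, and the toy snippets are placeholders):

```python
# Sketch of the "monolithic" flow: plain string processing to build one prompt,
# then a single call to the one big model behind a local OpenAI-compatible
# server. Endpoint, port, and the snippets are placeholders.
import json
import urllib.request

URL = "http://127.0.0.1:8080/v1/chat/completions"


def build_prompt(question, snippets):
    # Linear string work: stitch context together, no node graph required.
    context = "\n\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    return f"Answer using only the sources below.\n\n{context}\n\nQuestion: {question}"


def ask(prompt):
    payload = json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }).encode()
    req = urllib.request.Request(URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


print(ask(build_prompt("What is ComfyUI?", ["ComfyUI is a node-based UI.", "It runs Python."])))
```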

      • P03 Locke@lemmy.dbzer0.com · 4 days ago

        It’s a shame, because ComfyUI can be so much more than just image generation. And just because LLM work involves a lot of string processing doesn’t mean it isn’t worth capturing in an I/O interface, especially when it comes to preserving chat history: save data, load data, ask new questions, and so on.
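
        The chat-history part doesn’t need much machinery, to be fair. Against any local OpenAI-compatible server, save/load/ask could be a sketch as simple as this (file name and endpoint are assumptions):

```python
# Sketch of "save data, load data, ask new questions": the whole chat history
# is just a JSON list of messages that gets resent with every new question.
# File name and endpoint are assumptions; any OpenAI-compatible server works.
import json
import pathlib
import urllib.request

HISTORY = pathlib.Path("chat_history.json")
URL = "http://127.0.0.1:8080/v1/chat/completions"


def load_history():
    return json.loads(HISTORY.read_text()) if HISTORY.exists() else []


def ask(question):
    messages = load_history() + [{"role": "user", "content": question}]
    req = urllib.request.Request(
        URL,
        data=json.dumps({"messages": messages}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": answer})
    HISTORY.write_text(json.dumps(messages, indent=2))  # persist for the next session
    return answer


print(ask("Pick up where we left off: what were we discussing?"))
```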

        ChatGPT is pretty damn powerful, I’ll admit. But, all of its components need to be localized, especially since something like a Mixture of Experts model could be split down to base models and loaded/unloaded as necessary.

        • brucethemoose@lemmy.world · 4 days ago (edited)

          , especially since something like a Mixture of Experts model could be split down to base models and loaded/unloaded as necessary.

          It doesn’t work that way. All the MoE experts are ‘interleaved’, and you need all of them loaded at once, for every token. Some API servers can hot-swap whole models, but it’s not fast, and it’s rarely done since LLMs are pretty ‘generalized’ and API servers tend to serve requests in parallel.
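
          A toy sketch of why (simplified, not any real model’s code): the router scores every expert for every token and mixes the top-k, so different tokens hit different experts and the whole expert set has to stay resident:

```python
# Toy MoE layer: a router scores all experts for every token and mixes the
# top-k. Because each token picks its own experts, every expert's weights
# must stay loaded to serve even a single request.
import torch

n_experts, top_k, d = 8, 2, 16
experts = torch.nn.ModuleList(torch.nn.Linear(d, d) for _ in range(n_experts))  # all resident
router = torch.nn.Linear(d, n_experts)


def moe_layer(tokens):                          # tokens: (seq_len, d)
    scores = router(tokens)                     # (seq_len, n_experts)
    weights, chosen = scores.topk(top_k, dim=-1)
    weights = weights.softmax(dim=-1)
    rows = []
    for t, token in enumerate(tokens):
        # Different tokens route to different experts, so across a sequence
        # essentially the whole expert set gets touched.
        mixed = sum(w * experts[int(i)](token) for w, i in zip(weights[t], chosen[t]))
        rows.append(mixed)
    return torch.stack(rows)


print(moe_layer(torch.randn(5, d)).shape)  # torch.Size([5, 16])
```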

          The closest thing to what you’re describing is LoRAX, which basically hot-swaps LoRAs efficiently. But it needs an extremely specialized runtime derived from its associated paper, so people tend not to use it, since it doesn’t support quantization and some other features: https://github.com/predibase/lorax
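
          For reference, the request side of LoRAX is simple: the LoRA to apply is just a per-request parameter on top of a shared base model (host, port, and the adapter id below are hypothetical, loosely following its README):

```python
# Hedged sketch of what a LoRAX generate request roughly looks like: the
# adapter is a per-request parameter, so different LoRAs can share one base
# model without reloading it. Host, port, and adapter id are hypothetical.
import json
import urllib.request

payload = json.dumps({
    "inputs": "Write a haiku about node graphs.",
    "parameters": {
        "max_new_tokens": 64,
        "adapter_id": "some-user/haiku-lora",  # hypothetical adapter name
    },
}).encode()
req = urllib.request.Request("http://127.0.0.1:8080/generate", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["generated_text"])
```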

          There is a good case for pure data processing, yeah… but it has little integration with the LLMs themselves, especially since the API servers generally handle tokenizers and prompt formatting.

          But, all of its components need to be localized

          They already are! Local LLM tooling and engines are great and super powerful compared to ChatGPT (which offers no caching, no raw completion, primitive sampling, hidden thinking, and so on).
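
          For example, a raw completion with logprobs against a local OpenAI-compatible server, which the ChatGPT product never exposes (URL, model name, and exact logprobs support depend on the backend):

```python
# Raw completion with logprobs from a local OpenAI-compatible server: you send
# the literal prompt string (no hidden chat template) and ask for per-token
# logprobs. URL, model name, and logprobs support vary by backend.
import json
import urllib.request

payload = json.dumps({
    "model": "local-model",               # whatever the server was launched with
    "prompt": "The capital of France is",
    "max_tokens": 1,
    "temperature": 0.0,
    "logprobs": 5,                        # top-5 alternatives per generated token
}).encode()
req = urllib.request.Request("http://127.0.0.1:8080/v1/completions", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    out = json.loads(resp.read())
choice = out["choices"][0]
print(choice["text"], choice.get("logprobs"))
```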