Hello!

As a handsome local AI enjoyer™, you’ve probably noticed one of the big flaws with LLMs:

They lie. Confidently. ALL THE TIME.

(Technically, they “bullshit” - https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

Not a model, not a UI, not magic voodoo.

A glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in (rough sketch of the idea below)
  • the summ pass also moves the original doc to a sub-folder
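
To make “SHA-256 provenance baked in” concrete, here’s the rough idea in code. Illustrative only: write_summ, the header comment, and the file naming are mine, not necessarily what llama-conductor actually writes.

```python
# Illustrative sketch: "summary + SHA-256 of the source, baked into the file".
# The real SUMM_*.md layout that llama-conductor writes may look different.
import hashlib
from pathlib import Path

def write_summ(source: Path, summary_text: str) -> Path:
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    summ_path = source.with_name(f"SUMM_{source.stem}.md")
    header = f"<!-- source: {source.name} | sha256: {digest} -->"
    summ_path.write_text(f"{header}\n\n{summary_text}\n", encoding="utf-8")
    return summ_path
```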

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn’t there, it tells you - explicitly - instead of winging it. For example:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don’t GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.
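
If you’re curious what “promote into Qdrant” boils down to, here’s a rough sketch using qdrant-client. The collection name, payload fields, and the embed() helper are made up for illustration; the router’s actual schema may differ.

```python
# Rough sketch of "promote a SUMM into Qdrant" - not llama-conductor's actual code.
import hashlib
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

def promote_to_vault(client: QdrantClient, summ_text: str, source_sha256: str, embed) -> None:
    point_id = int(hashlib.sha256(summ_text.encode()).hexdigest()[:16], 16)  # stable id per SUMM
    client.upsert(
        collection_name="vault",
        points=[PointStruct(
            id=point_id,
            vector=embed(summ_text),  # your embedding model of choice
            payload={"text": summ_text, "source_sha256": source_sha256},
        )],
    )
```

The point of dragging the source hash along in the payload is that anything retrieved later can still be traced back to the exact doc version it came from.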

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
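
For the shape of the triple-pass in code terms, a rough sketch: run_pass stands in for a single model call, and the names and refusal text are illustrative, not the actual Mentats code.

```python
# The shape of the triple-pass idea, not the actual Mentats implementation:
# Vault-only retrieval, refusal on empty retrieval, then thinker -> critic -> thinker
# over the same frozen fact set.
def mentats(question, vault_search, run_pass):
    facts = vault_search(question)  # Qdrant/Vault only: no chat history, no KBs, no Vodka
    if not facts:
        return "FINAL_ANSWER:\nThe provided facts do not contain relevant information.\nFACTS_USED: NONE"
    draft = run_pass("thinker", question, facts)                  # pass 1: draft from facts only
    critique = run_pass("critic", question, facts, draft)         # pass 2: attack the draft
    return run_pass("thinker", question, facts, draft, critique)  # pass 3: revise or refuse
```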

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)
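
On disk, a stored fact could look something like this (field names are my guess at the idea of verbatim value + TTL + touch limits, not Vodka’s actual schema):

```python
# What a stored fact could look like on disk. Field names are guesses, not Vodka's schema.
import json, time

fact = {
    "key": "server ip",
    "value": "my server is 203.0.113.42",  # stored verbatim, never paraphrased
    "created": time.time(),
    "ttl_seconds": 30 * 24 * 3600,         # expires instead of rotting forever
    "touches": 0,                          # bumped on every ?? recall
    "max_touches": 50,                     # hard cap so memory doesn't become landfill
}

with open("facts.json", "w", encoding="utf-8") as f:
    json.dump({fact["key"]: fact}, f, indent=2)
```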

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
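
The CTC half is dead simple in principle - a minimal sketch, with made-up parameter names and defaults:

```python
# Minimal sketch of the CTC idea: keep only the last N messages and cap total
# characters before the prompt ever hits the backend.
def cut_the_crap(messages, max_messages=20, max_chars=16000):
    kept = list(messages[-max_messages:])              # last N messages only
    while kept and sum(len(m["content"]) for m in kept) > max_chars:
        kept.pop(0)                                    # drop oldest until under the char cap
    return kept
```

Trimming by message count first and character budget second is what keeps the KV cache roughly constant no matter how long the chat gets.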


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can’t draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

    • SuspciousCarrot78@lemmy.world (OP) · 8 hours ago

      Parts of this are RAG, sure

      RAG parts:

      • Vault / Mentats is classic retrieval + generation.
      • Vector store = Qdrant
      • Embedding and reranker

      So yes, that layer is RAG with extra steps.

      What’s not RAG -

      KB mode (filesystem SUMM path)

      This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.

      If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.

      Vodka (facts memory)

      That’s not retrieval at all, in the LLM sense. It’s verbatim key-value recall.

      • JSON on disk
      • Exact store (!!)
      • Exact recall (??)

      Again, no embeddings, no similarity search, no model interpretation.

      “Facts that aren’t RAG”

      In my setup, they land in one of two buckets.

      1. Short-term / user facts → Vodka. That’s for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.

      2. Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.

      In response to the implicit “why not just RAG then”:

      Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.

      The extra “steps” are there to separate memory from knowledge, separate retrieval from synthesis, and make refusal a legal output, not a model choice.

      So yeah, some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don’t trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that’s a weird way to operate maybe (adversarial, assume the worst, engineer around the issue) but that’s how ASD brains work.

      • ThirdConsul@lemmy.zip · 10 hours ago

        The system summarizes and hashes docs. The model can only answer from those summaries in that mode

        Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

        • SuspciousCarrot78@lemmy.world (OP) · 9 hours ago

          Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

          Huh? That is the literal opposite of what I said. Like, diametrically opposite.

          Let me try this a different way.

          Hallucination in SUMM doesn’t “poison” the KB, because SUMMs are not authoritative facts, they’re derived artifacts with provenance. They’re explicitly marked as model output tied to a specific source hash. Two key mechanics that stop the cascade you’re describing:

          1. SUMM is not a “source of truth”

          The source of truth is still the original document, not the summary. The summary is just a compressed view of it. That’s why it carries a SHA of the original file. If a SUMM looks wrong, you can:

          a) trace it back to the exact document version
          b) regenerate it
          c) discard it
          d) read the original doc yourself and manually curate it.

          Nothing is “silently accepted” as ground truth.
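
          Concretely, “trace it back” is just a hash check. Rough sketch, with an invented provenance header format purely for illustration:

          ```python
          # Rough sketch: the SUMM's recorded hash vs. the current doc on disk.
          import hashlib
          from pathlib import Path

          def summ_matches_source(summ_path: Path, source_path: Path) -> bool:
              recorded = summ_path.read_text().split("sha256:")[1].split()[0]
              actual = hashlib.sha256(source_path.read_bytes()).hexdigest()
              return recorded == actual  # False => SUMM was derived from a different doc version
          ```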

          2. Promotion is manual, not automatic

          The dangerous step would be: model output -> auto-ingest into long-term knowledge.

          That’s explicitly not how this works.

          The flow is: Attach KB -> SUMM -> human reviews -> OK, move to Vault -> Mentats runs against that

          Don’t like a SUMM? Don’t push it into the vault. There’s a gate between “model said a thing” and “system treats this as curated knowledge.” That’s you - the human. Don’t GI and it won’t GO.

          Determinism works for you here. The hash doesn’t freeze the hallucination; it freezes the input snapshot. That makes bad summaries:

          • reproducible
          • inspectable
          • fixable

          Which is the opposite of silent drift.

          If SUMM is wrong and you miss it, the system will be consistently wrong in a traceable way, not creatively wrong in a new way every time.

          That’s a much easier class of bug to detect and correct. Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version.”

          And that is ultimately what keeps the pipeline from becoming “poisoned”.

          • ThirdConsul@lemmy.zip · 5 hours ago

            Huh? That is the literal opposite of what I said. Like, diametrically opposite.

            The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step.

            No, that’s exactly what you wrote.

            Now, with this change

            SUMM -> human reviews

            That would fix it, but it will only work for small KBs; otherwise the summary would have to be exhaustive.

            Case in point: assume a Person model with 3-7 facts per Person. Assume a small set of 3000 Persons. How would the SUMM of that work? Do you expect a human to verify that SUMM? How are you going to converse with your system to get the data from that KB Person set? Because to me that sounds like case C: it only works for small KBs.

            Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version.”

            Fair. Except that you are still left with the original problem: you don’t know WHEN the information is incorrect if you missed it at SUMM time.

        • PolarKraken@lemmy.dbzer0.com · 6 hours ago

          Woof, after reading your “contributions” here, are you this fucking insufferable IRL or do you keep it behind a keyboard?

          Goddamn. I’m assuming you work in tech in some capacity? Shout-out to anyone unlucky enough to white-knuckle through a workday with you, avoiding an HR incident would be a legitimate challenge, holy fuck.