I’ve recently played with the idea of self-hosting an LLM. I am aware that it will not reach GPT-4 levels, but being free to prompt it with confidential data, without restraints, is a very nice tool for me to have.
Has anyone got experience with this? Any recommendations? I have downloaded the full Reddit dataset, so I could retrain the model on it, since selected communities provide immense value and knowledge (hehe, this is exactly what Reddit, Twitter etc. are trying to avoid…)
The best/easiest way to get started with a self-hosted LLM is to check out this repo:
https://github.com/oobabooga/text-generation-webui
Its goal is to be the Automatic1111 of text generators, and it does a fair job at it.
A good model that’s said to rival GPT-3.5 is the new Falcon model. The full-sized version is too big to run on a single GPU, but the 7B version “only” needs about 16GB.
https://huggingface.co/tiiuae/falcon-7b
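If you want to poke at Falcon-7B directly, here is a minimal sketch with Hugging Face transformers (assuming you have enough GPU memory; at the time Falcon shipped custom modeling code, hence trust_remote_code):

```python
# pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights, roughly 13 GiB instead of 26 GiB
    device_map="auto",           # let accelerate place layers on the available GPU/CPU
    trust_remote_code=True,      # Falcon used custom model code at release
)

inputs = tokenizer("Self-hosting an LLM is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```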
There’s also the Wizard-uncensored model that is popular.
https://huggingface.co/ehartford/Wizard-Vicuna-13B-Uncensored
There are a ton of models out there with new ones popping up every day. You just need to search around. The oobabooga repo has a few models linked in the readme also.
Edit: there’s also h2oGPT, which seems really promising. I’m going to try it out in the next couple of days.
Note that when using llama-derived models, such as vicuna, you are bound by their license to only use them for “research” purposes.
If you want an unrestricted version, go for open-llama or RedPajama.
Falcon is less restrictive and only wants a cut of profits if they exceed 1 million dollars, but I’d wager that fully unrestricted is the way to go.
Falcon has switched to Apache 2.0 and removed the commercial limit.
Sorry, I must’ve missed that somehow; then my comment only applies to llama and its direct derivatives.
How do you know how much RAM the model needs?
The model creator usually mentions it in the readme:
“You will need at least 16GB of memory to swiftly run inference with Falcon-7B.”
Usually the models support CPU inference. Tremendously slow but works in a pinch.
There’s a rough correlation between the model’s parameter count and the execution precision being used (e.g. 7B parameters at f16 precision). Using optimized 8-bit or even 4-bit execution will reduce memory usage, at the cost of somewhat slower execution.
It’s entirely dependent on the model, the framework, the hardware (CPU vs GPU).
Generally there should be some indication somewhere in the model’s repo that states what you need.
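To put rough numbers on it: the weights alone take roughly parameter count × bytes per parameter, so you can ballpark it yourself (activations and the KV cache add a few more GB on top, which is roughly why the Falcon readme says 16GB for the fp16 7B model). A quick sketch:

```python
def weights_memory_gib(params_billions: float, bits_per_param: int) -> float:
    """Rough lower bound: model weights only, ignoring activations and the KV cache."""
    return params_billions * 1e9 * (bits_per_param / 8) / 1024**3

# Falcon-7B at different precisions (weights only):
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weights_memory_gib(7, bits):.1f} GiB")
# -> 16-bit: ~13.0 GiB, 8-bit: ~6.5 GiB, 4-bit: ~3.3 GiB
```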
If you want extremely low code, I recommend GPT4All. The prebuilt binaries/exes run locally on CPU and give you a choice of model to use so you can try out a couple to see which you like the best. It’s remarkably quick on my Ryzen 7 3700X, and it doesn’t take long to get a little web server running with Langchain if you want to put in a bit more effort, too.
Do you need some particular Python stuff, or is it all provided?
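For the Python side: GPT4All also publishes a gpt4all package on PyPI, so the bindings come prebuilt. A minimal sketch (the model filename is just one example from the library’s download list; it gets fetched on first use):

```python
# pip install gpt4all
from gpt4all import GPT4All

# Example model name; the library can list and download several others.
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
print(model.generate("Explain what a vector database is, in one sentence."))
```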
You can absolutely self-host LLMs. The HELM team has done an excellent job benchmarking the efficiency of different models for specific tasks, so that would be a good place to start. You can balance model performance for your specific task against the model’s efficiency; in most situations, larger models perform better but use more GPUs or are only available via APIs.
There are currently three different approaches to using AI for a custom task and application:
- Train a base LLM from scratch - this is like creating your own GPT-style model from the ground up. This gives the maximum level of control, however the amount of compute, time, and data required for training does not make this an ideal approach for the end user. There are many open source base LLMs already published on HuggingFace that can be used instead.
- Fine-tune a base LLM - starting with a base LLM, it can be fine-tuned for a certain set of tasks. For example, you can fine-tune a model to follow instructions or to use as a chatbot. InstructGPT and GPT-3.5+ are examples of fine-tuned models. This approach allows you to create a model that understands a specific domain or a set of instructions particularly well compared to the base LLM. However, any time training a large model is needed, it will be an expensive approach. If you are starting out, I’d suggest exploring this as a v2 step for improving your model.
- Prompt engineering or indexing using an existing LLM - starting with an existing model, create prompts to achieve your objective. This approach gives you the least control over the model itself, but is the most efficient. I would suggest this as the first approach to try. Langchain is the most widely used tool for prompt engineering and supports using self-hosted base or instruct LLMs. If your task is search and retrieval, an embeddings model is used instead: you generate embeddings for all your content and store them as vectors; for a user query, you convert it to an embedding using the same model and then retrieve the most similar content based on vector similarity (see the sketch after this list). Langchain provides this capability, but IMO sentence-transformers may be a better starting point for a self-hosted retrieval application. Without any intention to hijack this post, you can check out my project - synology-photos-nlp-search - as an example of a self-hosted retrieval application.
To learn more, I have found the recent deeplearning.ai short courses to be quite good - they are short, comprehensive, and free.
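A minimal sketch of that embeddings-based retrieval flow with sentence-transformers (the model name and documents are just placeholders; a real app would persist the vectors in a store such as Chroma):

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly embedding model

docs = [
    "How to configure a reverse proxy with nginx",
    "Baking sourdough bread at home",
    "Running large language models on consumer GPUs",
]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

query = "self hosting an LLM"
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity to the query and print the best match
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(docs[best], float(scores[best]))
```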
I’m about to start this journey myself. I found this, which looks promising: https://github.com/ggerganov/llama.cpp
Would be nice if someone here with some experience could share.
Edit: also this https://gpt4all.io/index.html
I think I set that up successfully on a VM under Windows.
It’s obviously a level worse than ChatGPT, but it worked surprisingly well otherwise. Poorer answers, but still not bad.
If you don’t have a good GPU then just use gpt4all
I personally use llama.cpp in a VM; however, if you have an Nvidia GPU with lots of VRAM you’ve got more options available, as well as much faster inference (text generation) speed. Check out the community at !localllama@sh.itjust.works, they’re pretty experienced with running LLMs locally.
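If you’d rather drive llama.cpp from code instead of its CLI, there are Python bindings (llama-cpp-python). A rough sketch, where the model path and quantization are just whatever GGML/GGUF file you downloaded:

```python
# pip install llama-cpp-python   (CPU build; see the project's README for GPU builds)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/wizard-vicuna-13b.q4_0.bin",  # example path, any compatible file works
    n_ctx=2048,                                        # context window size
)

out = llm("Q: What does self-hosting an LLM get you? A:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```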
Why Nvidia specifically?
At the moment most LLM libraries use CUDA for acceleration, which is Nvidia’s proprietary GPU compute platform, so it only runs on their cards.
I believe llama.cpp can make use of AMD GPUs, but double-check the project’s GitHub discussions first to confirm this, and see how people set it up.
I would advise not training your own model, but instead using tools like langchain and chroma in combination with an open model like gpt4all or falcon :).
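A rough sketch of what that combination can look like (the class paths and model file are examples, and LangChain’s API changes frequently, so treat this as a starting point rather than a recipe):

```python
# pip install langchain chromadb gpt4all sentence-transformers
from langchain.llms import GPT4All
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA

# Your own notes, wiki pages, exported threads, etc.
docs = [
    "The NAS backs up to an offsite server every night at 02:00.",
    "The reverse proxy terminates TLS and forwards to the LLM container.",
]

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
store = Chroma.from_texts(docs, embeddings)

llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin")  # example local model file
qa = RetrievalQA.from_chain_type(llm=llm, retriever=store.as_retriever())

print(qa.run("When do backups happen?"))
```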
So in general explore langchain!
Repos you might want to (git) checkout:
Not sure if you’re asking about already-trained models or you want to train your own.
If you just want to have fun, the small-to-medium models are pretty OK. Things like Wizard Vicuna 13B or the smaller 7B. You just have to try some of them until you find what’s best for your use case. For example, I have a model running Discord bots (with different personalities), but the same model would work badly with my other projects. Especially considering that with some models you can just chat, while others need instructions.
There are also recent models that approach GPT levels. The downside is they are huge in terms of hardware cost (hundreds of GBs of RAM, multiple GPUs). But they won’t necessarily be better than a smaller, more focused model.
Get oobabooga (the Automatic1111 of chat LLMs) and then search for TheBloke on Hugging Face for models.
If you want to host a text model that is reachable by you or anyone securely over the internet, I suggest you turn your PC into a worker for the AI Horde. You would then be able to access the model you’re serving from everywhere, but also everyone else’s LLM and Stable Diffusion models with priority. You would also be improving the commons.
I can vouch for the horde; it’s addicting to watch your little point counter go up after you’ve put something out there and to see people use something you are hosting.
It’s awesome to put a computer out onto the internet and have real life people getting real benefit within minutes. This is a way you can do it, and there’s so much demand that you are helping people by putting your machine out there.
However, I will give you a fair warning, it will be used for porn. Not entirely, but it will happen.
Not entirely, but almost exclusively :D
However, you can make the worker SFW if you prefer.
You might find some starting points or even projects or terms to look for in this article:
This project might not be exactly what you’re looking for due to the limited number of prebuilt models, but it is an interesting project nonetheless. It seems to run on a variety of hardware (even smartphones); however, you’ll need to compile your own models if there isn’t a prebuilt one available. Luckily at least Vicuna is included as a prebuilt model. There’s another model included called RWKV-Raven, which is actually an RNN instead of a transformer, yet approaches transformer-level performance. Seems pretty interesting.
The openai cookbook, while mostly focused on openai llms, provides lots of useful information about how to improve result reliability by tweaking your prompt and a lot more such as code samples: https://github.com/openai/openai-cookbook
About langchain, I’ll go a bit against the flow and suggest against it if you want to actually understand what is happening. It provides too much abstraction that hides the prompts and prevents you from easily adapting its behavior. This discussion on Hacker News talks more about it: https://news.ycombinator.com/item?id=36645575 Having recently dived into this topic and having been bitten by langchain’s shortcomings, I can’t but agree with the comments.
I tried quay.io/go-skynet/local-ai, but my server lacks the CPU instruction set for it.
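If you run into the same thing, the missing instructions are usually AVX/AVX2, which most llama.cpp-based builds assume (an assumption on my part; check LocalAI’s docs for the exact requirement). On Linux you can check quickly:

```python
# Print which SIMD extensions the CPU advertises (Linux only, reads /proc/cpuinfo).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for feature in ("avx", "avx2", "avx512f"):
    print(feature, "yes" if feature in flags else "no")
```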