In this blog post, we will guide you through the process of setting up and running your own private Open Assistant. Open Assistant is a chat-based assistant capable of understanding tasks, interacting with third-party systems, and retrieving information dynamically. It is a genuinely open-source solution, unlike other popular solutions that only include the word “open” in their names.
By following this tutorial, you can harness the power of Open Assistant without relying on third-party inference APIs or exposing your conversations to external entities.
To follow this tutorial, you will need a Genesis Cloud account and a GPU instance created from an Ubuntu 20.04 image (this guide uses a single RTX 3090).
After you create the instance, it will appear on the console dashboard. Once it is ready, its public IPv4 address will be displayed. This process usually takes 1-2 minutes.
Unless stated otherwise, execute all the following steps via SSH on your instance.
Windows users can use PuTTY (guide available here), while Linux or macOS users can refer to this knowledge base entry for the command-line SSH client.
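For example, on Linux or macOS you can connect with a single command (assuming the default ubuntu user of the Genesis Cloud Ubuntu images; replace <instance-ip> with the public IPv4 address from the dashboard):
# Connect to your instance
ssh ubuntu@<instance-ip>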
# Replace the preconfigured apt mirror and refresh the package index
sudo sed -i 's/nova.clouds./NO./g' /etc/apt/sources.list && sudo apt update
Upgrade all packages (without prompting for confirmation).
sudo apt -o Dpkg::Options::="--force-confold" upgrade --yes
At the time of writing, CUDA 12 has already been released, but many software packages are not yet compatible with it. We therefore install CUDA 11.8 to avoid unexpected issues.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-11-8
# Make the CUDA toolkit available in all future shells
sed -i '1i\export PATH="/usr/local/cuda/bin:$PATH"\nexport LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"\n' ~/.bashrc
sudo reboot
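After the instance has rebooted, reconnect via SSH and verify the installation. A quick sanity check (nvidia-smi comes with the driver, nvcc with the toolkit):
# The driver should list your GPU(s)
nvidia-smi
# The toolkit should report release 11.8
nvcc --version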
We will use the text-generation-webui to interface with the Open Assistant model.
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
# The next command installs Miniconda in batch mode. It expects that you agree with its license terms!
bash Miniconda3.sh -b -u -p $HOME/miniconda3
~/miniconda3/bin/conda init $(basename $SHELL)
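Note that conda init only takes effect in new shells. To make the conda command available in your current session, reload your shell configuration (assuming bash, the Ubuntu default):
# Reload the shell configuration updated by conda init
source ~/.bashrc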
sudo apt -y install build-essential
conda create --yes -n textgen python=3.10.10
conda activate textgen
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
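Before continuing, you can optionally confirm that this PyTorch build can actually use the GPU:
# Should print True and e.g. NVIDIA GeForce RTX 3090 if CUDA is usable
python3 -c 'import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))'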
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
As outlined in the README of the text-generation-webui, we have to place the models in the aptly named models directory. Luckily, this is mostly automated. Execute the following command in the current directory to take care of it:
python3 download-model.py OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5
This will download the ~23 GB of open/free model data from the Hugging Face servers. There are other variants of questionable legality floating around; use those at your own risk. Your Genesis Cloud instance can (by default) download with up to 1 Gbit/s, so you can expect this step to take 4-5 minutes depending on the load and connectivity of the servers.
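Once the download has finished, the model ends up in a subdirectory of models named after the Hugging Face repository (with the slash replaced by an underscore). You can verify it with:
# Should show roughly 23 GB
du -sh models/OpenAssistant_oasst-sft-4-pythia-12b-epoch-3.5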
Now that we have the model in place, we can start the web UI:
python3 server.py --gpu-memory 22 --share
If you use a GPU other than an RTX 3090, you need to adapt the --gpu-memory parameter. The same is true if you want to use multiple GPUs. Running python3 server.py -h will provide more details and examples. If you disconnected your SSH session (for example, to set up port forwarding), you need to re-activate the conda environment, switch to the text-generation-webui directory, and start the server again:
conda activate textgen
cd text-generation-webui
python3 server.py --chat --model OpenAssistant_oasst-sft-4-pythia-12b-epoch-3.5 --gpu-memory 22 --share
# Give it a few seconds to load the model and start up
You can now access the web UI at the displayed URL (https://….gradio.live) 🎉
We recommend not relying on the public Gradio proxy service to access the web UI but accessing it in another way. There are many ways to skin this cat (SSH port forwarding, a local proxy with TLS termination, (free) Cloudflare fronting, …), so the details are out of scope for this article. Not relying on the public Gradio proxy also makes the UI much more responsive.
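As one example, a minimal SSH port-forwarding setup might look like this (a sketch assuming the web UI runs on its default port 7860 and that you start the server without --share; run the command on your local machine and replace <instance-ip> with your instance's public IPv4 address):
# Forward local port 7860 to port 7860 on the instance
ssh -N -L 7860:localhost:7860 ubuntu@<instance-ip>
# Then open http://localhost:7860 in your local browser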
Now that everything is up and running, we want to use the web UI. It is as easy as it gets:
If you only get truncated responses, check your console output for OutOfMemoryError messages. You can work around those by using an instance with multiple GPUs (e.g., 2x RTX 3090). If you use multiple GPUs, make sure to adapt the --gpu-memory parameter accordingly by specifying the amount of VRAM to allocate per GPU, separated by spaces (e.g., --gpu-memory 23 23 for 2x RTX 3090).
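For example, on a 2x RTX 3090 instance the start command from above becomes:
python3 server.py --chat --model OpenAssistant_oasst-sft-4-pythia-12b-epoch-3.5 --gpu-memory 23 23 --share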
The Genesis Cloud team
Never miss out again on Genesis Cloud news and our special deals: follow us on Twitter, LinkedIn, or Reddit.
Sign up for an account with Genesis Cloud here. If you want to find out more, please write to contact@genesiscloud.com.