Accessing the models
We provide access to our models through two channels: on-premise installation and Hugging Face.
On-premise installation
Our customers are supplied with our full LLM stack, including model weights and inference runtime, and receive open access to the full model checkpoint, weights and code, for commercial use. Contact us for options to deploy the Pharia-1-LLM-7B models in any cloud or on-premise environment.
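For the Hugging Face channel mentioned above, a minimal loading sketch with the transformers library might look like the following. The repository identifier used here is an assumption; check the official Hugging Face organization page for the exact name and any license-acceptance steps.

```python
# Minimal sketch: loading a Pharia-1-LLM-7B checkpoint from Hugging Face.
# The repository ID below is an assumption; verify the exact name on the
# official Hugging Face organization page before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Aleph-Alpha/Pharia-1-LLM-7B-control"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",        # place weights on available GPU(s) or CPU
    trust_remote_code=True,   # only needed if the repo ships custom model code
)

prompt = "Explain the difference between on-premise and hosted inference."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```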