Accessing the models

We provide access to our models through two channels: on-premise installation and Hugging Face.


On-premise installation

Our customers receive our full LLM stack, including model weights and the inference runtime. Contact us for options to deploy the Pharia-1-LLM-7B models in any cloud or on-premise environment. Customers get open access to the full model checkpoint, including weights and code, for commercial use.

Hugging Face

The model weights are available on Hugging Face under the Open Aleph License, which restricts usage to educational and research purposes.
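
For reference, the snippet below is a minimal sketch of how weights published on Hugging Face are typically loaded with the transformers library. The repository id shown is an assumption for illustration; replace it with the exact checkpoint name listed on our Hugging Face organization page, and note that custom model classes may require passing trust_remote_code.

```python
# Minimal sketch: loading a Pharia-1-LLM-7B checkpoint from Hugging Face.
# The repository id below is an assumption; check the organization page
# for the actual name and review the Open Aleph License before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Aleph-Alpha/Pharia-1-LLM-7B-control"  # assumed repository id

# trust_remote_code may be needed if the repository ships a custom model class.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# Run a short generation to verify the checkpoint loads correctly.
inputs = tokenizer("Write a short greeting.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```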