
Installation Prerequisites

Deployment Options

PhariaAI is a sovereign AI solution that gives you the flexibility and choice to install in an environment that suits your organisation, with no vendor lock-in:

| Deployment Type | Description |
| --- | --- |
| On-Premise | We support on-premise installations that meet the minimum requirements set out below. For an air-gapped system, please discuss this with our customer team. |
| Cloud | We support installation on any cloud provider that offers Kubernetes (managed or self-managed), including:<br>• StackIT (via Stackit Kubernetes Engine)<br>• AWS (via EKS)<br>• Google Cloud (via GKE)<br>• Azure (via AKS) |
| Hybrid | Hybrid installation is possible with our stack. Please discuss your specific requirements with our solutions engineering team. |
| SaaS | We do not currently provide a SaaS offering for PhariaAI. Please discuss managed instance options and our industry-vertical SaaS offerings with our customer team. |

Installation process

PhariaAI is deployed onto your chosen environment using Helm.

A single Helm chart will be provided to you via self-service. It defines the files and dependencies that will be created and packaged as an application using Kubernetes resources.

The full installation process can be found here.
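As a rough sketch of what a Helm-based deployment looks like: the registry URL, credential variables, and values file below are illustrative placeholders, not the actual self-service endpoints, and the real chart reference is provided with your credentials.

```shell
# Log in to the artifact registry with the Artifactory credentials
# provided to you (REGISTRY_URL, USERNAME, and TOKEN are placeholders).
helm registry login "$REGISTRY_URL" --username "$USERNAME" --password "$TOKEN"

# Install (or upgrade) the PhariaAI chart into a dedicated namespace.
# The OCI chart reference and values.yaml are illustrative placeholders.
helm upgrade --install pharia-ai "oci://$REGISTRY_URL/pharia-ai-helm" \
  --namespace pharia-ai \
  --create-namespace \
  --values values.yaml
```

A `values.yaml` file lets you keep environment-specific configuration (domain, storage classes, database endpoints) out of the command line and under version control.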

Prerequisites for Installation

info

The installation process requires familiarity with Kubernetes and Helm.

Credentials

A user account with access to the Aleph Alpha Artifactory is required. This will be provided to you.

On your local machine

note

Our documentation is written assuming you will be using Linux or macOS for your installation, but this is not required.

| Aspect | Requirements |
| --- | --- |
| Container Orchestration Platform | Kubernetes client v1.29 or above<br>• Check the version using `kubectl version`<br>• Check your connectivity using `kubectl get nodes` |
| Package Manager | Helm v3.0 or above<br>• Check the version using `helm version` |
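The client-side checks above can be scripted. This is a sketch that assumes `sort -V` is available on your machine; the `version_ge` helper is our own illustration, not a kubectl or Helm feature, and the minimum versions are taken from the table.

```shell
# Compare dotted version strings: succeeds when $1 >= $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Minimum versions from the table above.
KUBECTL_MIN="1.29"
HELM_MIN="3.0"

# Only run the live checks when the tools are installed.
if command -v kubectl >/dev/null 2>&1; then
  v="$(kubectl version --client -o json | sed -n 's/.*"gitVersion": *"v\([^"]*\)".*/\1/p' | head -n1)"
  version_ge "$v" "$KUBECTL_MIN" && echo "kubectl $v OK" || echo "kubectl $v is too old"
fi
if command -v helm >/dev/null 2>&1; then
  v="$(helm version --template '{{.Version}}' | sed 's/^v//')"
  version_ge "$v" "$HELM_MIN" && echo "helm $v OK" || echo "helm $v is too old"
fi
```

The script skips a check silently when the corresponding tool is not installed, so it is safe to run on a machine that is still being set up.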

On your Kubernetes Cluster

| Aspect | Criteria | Minimum Requirements |
| --- | --- | --- |
| Hardware | GPU | Quantity: 3x<br>Type: NVIDIA Ampere, Lovelace, or Hopper generation. Currently, only NVIDIA GPUs are supported; support for other vendors may be added in the future.<br>VRAM: each GPU must have a minimum of 40 GB VRAM.<br>GPU nodes: your Kubernetes cluster must include GPU nodes to run the inference stack application pods.<br>Finetuning models requires additional GPUs; see Finetuning Service Resource Requirements. |
| Hardware | CPU & Memory | 24 CPU cores, 128 GB RAM |
| Hardware | Object Storage | Quantity: 3x<br>Type: MinIO or any other S3-compatible backend for Pharia Data and Pharia Finetuning<br>Input/output operations per second (IOPS): 1,000 or above<br>Throughput: 100 Mb/s or above |
| Hardware | Persistent Volumes | Persistent volumes accessible by all GPU nodes in the cluster are essential for storing model weights.<br>Ensure your persistent volumes are configured to be accessible across availability zones, if applicable in your environment. |
| Software | Networking | Installed in a single namespace with open communication between all services in the namespace |
| Software | GPU Operator | We strongly recommend using the NVIDIA GPU Operator v24 or above with default settings to manage NVIDIA drivers and libraries on your GPU nodes. |
| Software | Ingress Controller & Domain | The cluster must include an ingress controller to enable external access to the PhariaAI services.<br>A certificate manager must also be configured to support secure access via TLS (Transport Layer Security).<br>A dedicated domain must be assigned to the Kubernetes cluster, enabling each service to host its application under a subdomain of this domain (e.g. `https://<service-name>.<ingress-domain>`). |
| Software | Relational Database Management | Postgres v14.0 or above<br>• 4x large instances: 100 GB storage, 2 CPU cores, 4 GB memory each<br>• 4x small instances: 5 GB storage, 2 CPU cores, 4 GB memory each |
| Software | Network Access & Whitelisting | Not required if the networking requirements are met.<br>If you require multiple namespaces, please discuss this with our solutions engineers. |
| Software | Artifact Management | Ability to pull the Helm chart (pharia-ai-helm) and container images from an external artifact repository manager, such as JFrog Artifactory. Credentials for this will be provided to you. |
| Software | Monitoring & Observability | No fixed requirements, but we recommend Prometheus and Grafana. |
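Once you have cluster access, several of the requirements above can be spot-checked from the command line. In this sketch, `nvidia.com/gpu` is the allocatable resource name conventionally advertised by the NVIDIA device plugin / GPU Operator, and the `vram_ok` helper is our own illustration of the 40 GB VRAM minimum (expressed in MiB), not a kubectl feature.

```shell
# Succeeds when the given VRAM amount (in MiB) meets the 40 GB minimum.
vram_ok() {
  [ "$1" -ge 40960 ]
}

# Only query the cluster when kubectl is installed and reachable.
if command -v kubectl >/dev/null 2>&1 && kubectl get nodes >/dev/null 2>&1; then
  # Nodes advertising NVIDIA GPUs via the device plugin / GPU Operator.
  kubectl get nodes -o custom-columns='NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'
  # Ingress controllers and storage classes available in the cluster.
  kubectl get ingressclass
  kubectl get storageclass
fi

echo "VRAM check for an 80 GiB card: $(vram_ok 81920 && echo pass || echo fail)"
```

Note that `kubectl` reports GPU count per node but not VRAM per GPU; confirming the 40 GB minimum still requires checking the GPU model (e.g. via the GPU Operator's node labels or `nvidia-smi` on the node).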

Operating Requirements for PhariaAI

Monitoring & Observability

We do not currently provide monitoring and observability as part of the installation, but you can connect your own. If you have questions about our recommended best practices using tools such as Prometheus and Grafana, please discuss this with our solutions engineering team.
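If you choose Prometheus and Grafana, one common route is the community `kube-prometheus-stack` Helm chart, which bundles both. This is a sketch of a generic installation, not a supported part of the PhariaAI setup; the release and namespace names are our own choices.

```shell
# Add the community chart repository and install the bundled
# Prometheus + Grafana stack into its own namespace.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace
```

Scraping metrics from the PhariaAI namespace is then a matter of configuring ServiceMonitor or scrape targets to suit your environment.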

Scaling Usage

Depending on how the platform will be used in your organisation, you may need to scale the hardware infrastructure to meet your demands.

In particular, custom use cases and applications that you develop will require additional resources as you move them from proof of concept into production for use across your wider organisation.