
13 docs tagged with "inference"


Announcing new unified worker configuration file format

With version api-worker-luminous:2024-08-15-0cdc0 of our inference stack worker, we introduce a new unified, versioned configuration format for our workers. Instead of two separate configuration files, a worker can now be configured with a single file.

Announcing release of Pharia embedding model

We are happy to bring you our new Pharia embedding model (Pharia-1-Embedding-4608-control), which builds on our latest Pharia LLM. The model is trained with adapters on top of frozen Pharia LLM weights and can therefore be served on the same worker for both completion and embedding requests (see figure below). You can read more about the training details and evaluations of the embedding model in our model card.
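Because the adapters sit on frozen LLM weights, an embedding request targets the same model name as a completion request. As a rough illustration, the sketch below builds a request body for an embedding call; the field names and the `representation` option are assumptions for demonstration, not taken from this announcement — consult the API reference for the exact schema.

```python
import json

def build_embedding_payload(text: str) -> dict:
    """Build an illustrative request body for embedding `text`."""
    return {
        "model": "Pharia-1-Embedding-4608-control",  # model name from the announcement
        "prompt": text,
        "representation": "symmetric",  # assumed option; check the API docs
    }

payload = build_embedding_payload("Aleph Alpha ships a new embedding model.")
body = json.dumps(payload)  # ready to POST to the serving worker
```

The same worker would answer a completion request for the underlying Pharia LLM, so no separate deployment is needed for embeddings.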

Improvements in AtMan speed

With version api-worker-luminous:2024-10-30-094b5 of our luminous inference workers, we've improved inference speed when running with our Attention Manipulation (AtMan) mechanism.

Introducing chat endpoint in Aleph Alpha inference stack

With version api-scheduler:2024-07-25-0b303 of our inference stack API scheduler, we now support a /chat/completions endpoint. It can be used to prompt a chat-capable LLM with a conversation history and generate a continuation of the conversation. The endpoint is available for all models that support the chat capability, and it is compatible with OpenAI's /chat/completions endpoint.
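Since the endpoint is OpenAI-compatible, a request follows the familiar `messages` format. The sketch below assembles such a request with only the standard library; the base URL, token, and model name are placeholders (assumptions), not values from this release note.

```python
import json
from urllib import request as urlrequest

API_BASE = "https://inference.example.invalid"  # placeholder: your deployment URL
API_TOKEN = "YOUR_TOKEN"                        # placeholder

def build_chat_payload(history: list, user_message: str,
                       model: str = "your-chat-capable-model") -> dict:
    """Assemble a /chat/completions request body from a conversation history."""
    return {
        "model": model,  # any model with the chat capability
        "messages": history + [{"role": "user", "content": user_message}],
    }

def send_chat(payload: dict) -> bytes:
    """POST the payload; needs a reachable deployment and a valid token."""
    req = urlrequest.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
    )
    with urlrequest.urlopen(req) as resp:
        return resp.read()

history = [{"role": "system", "content": "You are a helpful assistant."}]
payload = build_chat_payload(history, "Continue the conversation.")
```

Because the body matches OpenAI's schema, existing OpenAI-compatible clients can usually be pointed at the scheduler by swapping the base URL.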

Verify your on-premise installation and measure its performance

To verify that your installation works, we provide a script that uses the Aleph Alpha Python client to check that your system has been configured correctly. The script reports which models are currently available and provides basic performance measurements for each of them.