Announcing constrained decoding to ensure JSON format
Overview
We are happy to introduce our new Pharia embedding model (Pharia-1-Embedding-4608-control), which builds on our latest Pharia LLM. The model is trained with adapters on top of frozen Pharia LLM weights and can therefore be served on the same worker for both completion requests and embedding requests (see figure below). You can read more about the training details and evaluations of the embedding model in our model card.
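Because the adapters sit on top of frozen LLM weights, the same worker can answer both request types; only the request body differs. The sketch below builds an embedding request payload — the field names (`model`, `prompt`) are assumptions for illustration, not taken from the API reference:

```python
# Build the JSON body for an embedding request against the shared worker.
# Field names ("model", "prompt") are illustrative assumptions; check the
# API reference of your deployment for the exact schema.
import json

def build_embedding_request(text: str,
                            model: str = "Pharia-1-Embedding-4608-control") -> dict:
    """Return a request body targeting the embedding adapter of the model."""
    return {
        "model": model,   # same base weights also serve /complete requests
        "prompt": text,
    }

body = build_embedding_request("Hello, world!")
print(json.dumps(body))
```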
Meta has recently released version 3.1 of its Llama family of language models.
Today we are happy to announce support for more open-source models in the Aleph Alpha stack.
In version api-scheduler:2024-10-01-00535 of our inference stack API-scheduler, we added a new stream property to the /complete endpoint to enable streamed token generation.
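With `stream` enabled, the response arrives as incremental events rather than a single body. A minimal consumer sketch is shown below; the event format assumed here (server-sent-event lines of the form `data: {...}` with a `completion` field and a `[DONE]` terminator) is an illustrative assumption, not taken from the API reference:

```python
# Minimal sketch of consuming a streamed /complete response.
# The SSE line format and field names below are assumptions for illustration.
import json

def iter_stream_tokens(lines):
    """Yield generated text from server-sent-event lines, stopping at [DONE]."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments and keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # stream finished
        event = json.loads(payload)
        yield event.get("completion", "")

# Illustrative event stream as it might arrive over the wire:
sample = [
    'data: {"completion": "Hello"}',
    'data: {"completion": ", world"}',
    "data: [DONE]",
]
print("".join(iter_stream_tokens(sample)))  # → Hello, world
```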
With version api-scheduler:2024-07-25-0b303 of our inference stack API-scheduler, we now support a /chat/completions endpoint. It lets you send a conversation history to a chat-capable LLM and generate a continuation of the conversation. The endpoint is available for all models with the chat capability and is compatible with OpenAI's /chat/completions endpoint.
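Since the endpoint follows the OpenAI chat schema, a request body is a model name plus a list of role/content messages. The model name below is a placeholder, not a confirmed identifier:

```python
# Sketch of an OpenAI-compatible /chat/completions request body.
# The model name is a placeholder; "messages" follows the OpenAI chat schema
# (a list of {"role": ..., "content": ...} entries) that the endpoint mirrors.
import json

def build_chat_request(messages: list, model: str = "pharia-chat-model") -> dict:
    """Return a /chat/completions request body in the OpenAI-compatible schema."""
    return {"model": model, "messages": messages}

body = build_chat_request([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the release notes."},
])
print(json.dumps(body))
```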