Settings of models.
GET https://api.aleph-alpha.com/model-settings
Only models available to the client will be listed.
Responses
- 200 OK (application/json)

Schema
An array of model settings objects, each with the following fields:

- name (string)
- status (string): The current availability status of the model. Possible values:
  - available: The model is configured and a matching worker is connected to serve it.
  - unavailable: The model is configured, but no worker has shown recent activity to serve it.
- worker_type (string): The worker type used to serve the configured model. Possible values:
  - luminous: The model is served by a Luminous worker.
  - vllm: The model is served by a vLLM worker.
  - transcription: Worker type that serves transcription requests.
  - translation: Worker type that serves translation requests.
- description (string)
- multimodal (boolean): Feature flag for whether multimodal prompts are available to users.
- max_context_size (integer): The maximum context size of this model.
- semantic_embedding (boolean): True if this model supports semantic embeddings.
- completion_type (string): The completion type supported by the model. Possible values:
  - none: The model has not been trained to support completions. Trying to trigger a completion request will lead to a validation error.
  - full: The model has been trained to support completions.
- embedding_type (string): The embedding type supported by the model. This field replaces semantic_embedding_enabled and should always be set. If embedding_type is unset, semantic_embedding_enabled takes control; if both are used, implausible combinations are rejected. Possible values (each maps to an embedding endpoint; see the routing sketch after the example below):
  - none: The model cannot be used for embeddings. The scheduler will reject embedding requests to this model.
  - raw: The model has not explicitly been trained to support embeddings, but embedding details can still be retrieved technically. This option maps to the /embed endpoint.
  - semantic: The model has been trained with a switchable set of weights usable for semantic embedding retrieval. This option maps to the /semantic_embed endpoint.
  - instructable: The model has been trained to support any custom instruction for embedding retrieval. This option maps to the /instructable_embed endpoint.
- aligned (boolean): Specifies whether the model is aligned such that end users can be warned about the model's limitations.
- chat (boolean): True if this model is supported by the chat endpoint.
- prompt_template (string): A prompt template that can be used for this model.

Example
[
{
"name": "string",
"status": "available",
"worker_type": "luminous",
"description": "string",
"multimodal": true,
"max_context_size": 0,
"semantic_embedding": true,
"completion_type": "none",
"embedding_type": "none",
"aligned": true,
"chat": true,
"prompt_template": "string"
}
]
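As noted in the embedding_type description above, each embedding type maps to a dedicated embedding endpoint. The sketch below shows one way a client might route requests accordingly; the endpoint paths come from the schema, while the method name and the choice of exceptions are illustrative assumptions, not part of the API.

using System;

// Picks the embedding endpoint matching a model's embedding_type.
// The path mapping follows the schema above; everything else is illustrative.
static string EmbeddingEndpointFor(string embeddingType) => embeddingType switch
{
    "raw" => "/embed",
    "semantic" => "/semantic_embed",
    "instructable" => "/instructable_embed",
    "none" => throw new InvalidOperationException("This model cannot be used for embeddings."),
    _ => throw new ArgumentOutOfRangeException(nameof(embeddingType), embeddingType, "Unknown embedding_type.")
};

For example, a model reporting "embedding_type": "semantic" would be called via /semantic_embed.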
Authorization: http
- name: token
- type: http
- scheme: bearer
- description: Can be generated in your [Aleph Alpha profile](https://app.aleph-alpha.com/profile)
Example request (C#, HttpClient):
var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, "https://api.aleph-alpha.com/model-settings");
request.Headers.Add("Accept", "application/json");
request.Headers.Add("Authorization", "Bearer <token>");
var response = await client.SendAsync(request);
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());
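The sketch below repeats the request above and then deserializes the response body with System.Text.Json, filtering the resulting list. The ModelSetting record and its C# property names are assumptions made for this example; only the JSON property names mirror the example payload earlier in this section.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Text.Json.Serialization;

var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Get, "https://api.aleph-alpha.com/model-settings");
request.Headers.Add("Accept", "application/json");
request.Headers.Add("Authorization", "Bearer <token>");
var response = await client.SendAsync(request);
response.EnsureSuccessStatusCode();

var body = await response.Content.ReadAsStringAsync();
var models = JsonSerializer.Deserialize<List<ModelSetting>>(body) ?? new List<ModelSetting>();

// Example: print only models that are currently available and usable with the chat endpoint.
foreach (var model in models.Where(m => m.Status == "available" && m.Chat))
    Console.WriteLine($"{model.Name} (max context size {model.MaxContextSize})");

// Illustrative record mirroring one entry of the example payload above; not an official client type.
public record ModelSetting(
    [property: JsonPropertyName("name")] string Name,
    [property: JsonPropertyName("status")] string Status,
    [property: JsonPropertyName("worker_type")] string WorkerType,
    [property: JsonPropertyName("description")] string Description,
    [property: JsonPropertyName("multimodal")] bool Multimodal,
    [property: JsonPropertyName("max_context_size")] int MaxContextSize,
    [property: JsonPropertyName("semantic_embedding")] bool SemanticEmbedding,
    [property: JsonPropertyName("completion_type")] string CompletionType,
    [property: JsonPropertyName("embedding_type")] string EmbeddingType,
    [property: JsonPropertyName("aligned")] bool Aligned,
    [property: JsonPropertyName("chat")] bool Chat,
    [property: JsonPropertyName("prompt_template")] string PromptTemplate);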