Get All Model Cards
GET https://api.pharia.example.com/v1/studio/models
Get all model cards from the list of available models.
Responses
- 200: Successful Response (application/json)
Schema: the response body is an array of model card objects with the following fields.
- name: The name of the model
- status: The current availability status of the model. Currently supported states are 'available' and 'unavailable'. Possible values: [available, unavailable]
- description (object, required)
- max_context_size: The maximum number of tokens the model can process in a single input
- aligned
- semantic_embedding: Whether the model can generate semantic embeddings
- worker_type: Type of worker that serves the model. Can either be 'luminous' or 'vllm'. If 'luminous' is set, the model supports advanced completion parameters; calling a model with these parameters when it does not support them raises an error
- multimodal: Whether the model can process multiple types of input data (e.g., text, images)
- chat: Whether the model is supported by the chat endpoint
- completion_type: The completion type supported by the model; 'none' indicates the model has not been trained to support completions. Possible values: [full, none]
- prompt_template: The prompt template that should be used to prompt the model
- category (object)
- link: The link to the model card
- maximum_completion_tokens (object)
Example:
[
{
"name": "string",
"status": "available",
"description": "string",
"max_context_size": 0,
"aligned": true,
"semantic_embedding": true,
"worker_type": "string",
"multimodal": true,
"chat": true,
"completion_type": "full",
"prompt_template": "string",
"category": "string",
"link": "string",
"maximum_completion_tokens": 0
}
]
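As a rough sketch of how the example above maps onto a typed client model, the following C# record covers a subset of the fields. The record name ModelCard, the choice of fields, and the property types (inferred from the example values) are assumptions for illustration, not part of the documented contract.
using System.Text.Json.Serialization;
// Partial client-side view of a model card, based on the example payload above.
// Property types are inferred from the example values and may differ from the
// actual schema (e.g. fields rendered as "object" in the schema listing).
public record ModelCard(
    [property: JsonPropertyName("name")] string Name,
    [property: JsonPropertyName("status")] string Status,
    [property: JsonPropertyName("max_context_size")] int MaxContextSize,
    [property: JsonPropertyName("chat")] bool Chat,
    [property: JsonPropertyName("completion_type")] string CompletionType);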
Request example (C#, HttpClient):
var client = new HttpClient();
// Build a GET request for the model list and ask for a JSON response.
var request = new HttpRequestMessage(HttpMethod.Get, "https://api.pharia.example.com/v1/studio/models");
request.Headers.Add("Accept", "application/json");
// Send the request and throw if the response status is not successful.
var response = await client.SendAsync(request);
response.EnsureSuccessStatusCode();
// Print the raw JSON body.
Console.WriteLine(await response.Content.ReadAsStringAsync());
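Continuing the HttpClient example, the response body can be deserialized into the ModelCard record sketched after the schema and then filtered client-side, for instance to the models that are available and chat-capable. The variable names and the filtering criteria here are illustrative assumptions, not requirements of the API.
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;
// Parse the JSON array returned by the endpoint into the sketched ModelCard records
// (the JsonPropertyName attributes handle the snake_case field names).
var json = await response.Content.ReadAsStringAsync();
var models = JsonSerializer.Deserialize<List<ModelCard>>(json) ?? new List<ModelCard>();
// Keep only models that are currently available and supported by the chat endpoint.
foreach (var model in models.Where(m => m.Status == "available" && m.Chat))
    Console.WriteLine($"{model.Name} (context size: {model.MaxContextSize} tokens)");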