
Batched Semantic Embeddings

POST 

https://api.aleph-alpha.com/batch_semantic_embed

Embeds multiple prompts using a specific model and semantic embedding method. The resulting vectors can be used for downstream tasks (e.g. semantic similarity) and models (e.g. classifiers). To obtain a valid model, use GET /model-settings.

Request

Query Parameters

    nice boolean

    Setting this to true signals to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones.

Body

required
    model string

    Name of the model to use. A model name refers to a model's architecture (number of parameters among others). The most recent version of the model is always used. The model output contains information as to the model version. To find out which models support semantic embeddings, please refer to the /model-settings endpoint.

    hosting Hosting nullable

    Possible values: [aleph-alpha, null]

    Optional parameter that specifies which datacenters may process the request. You can either set the parameter to "aleph-alpha" or omit it (defaulting to null).

    Not setting this value, or setting it to null, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximum availability.

    Setting it to "aleph-alpha" allows us to only process the request in our own datacenters. Choose this option for maximal data privacy.

    prompts object[] required

    Array of prompts to embed. Each array item is one of:

    • string

    representation string required

    Possible values: [symmetric, document, query]

    Type of embedding representation to embed the prompt with.

    "symmetric": Symmetric embeddings assume that the text to be compared is interchangeable. Usage examples for symmetric embeddings are clustering, classification, anomaly detection or visualisation tasks. "symmetric" embeddings should be compared with other "symmetric" embeddings.

    "document" and "query": Asymmetric embeddings assume that there is a difference between queries and documents. They are used together in use cases such as search where you want to compare shorter queries against larger documents.

    "query"-embeddings are optimized for shorter texts, such as questions or keywords.

    "document"-embeddings are optimized for larger pieces of text to compare queries against.
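The asymmetric setup above can be sketched in Python. This is a hedged illustration, not an official client: the field names come from this page's schema, while the model name and the helper function are assumptions for the example. Sending the request is not shown here.

```python
# Sketch: building batch_semantic_embed request bodies for asymmetric search.
# Field names follow the schema on this page; build_payload and the model
# name "luminous-base" are illustrative assumptions.

def build_payload(texts, representation, model="luminous-base"):
    """Build a request body for POST /batch_semantic_embed."""
    assert representation in ("symmetric", "document", "query")
    return {
        "model": model,
        "prompts": list(texts),          # one embedding per prompt
        "representation": representation,
    }

# Documents are embedded once with "document"...
doc_payload = build_payload(["A long article about apples..."], "document")
# ...and each incoming search query with "query".
query_payload = build_payload(["apple health benefits"], "query")

print(doc_payload["representation"], query_payload["representation"])
```

The resulting query vectors are then compared against the stored document vectors, e.g. with cosine similarity.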

    compress_to_size SemanticEmbeddingCompressToSize nullable

    Possible values: [128]

    The default behavior is to return the full embedding with 5120 dimensions. With this parameter you can compress the returned embedding to 128 dimensions. The compression is expected to cause a small drop in accuracy (4-6%), with the benefit of a much smaller embedding that is much faster to compare, which helps in use cases where speed is critical. The compressed embedding can also perform better if you are embedding very short texts or documents.
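The storage savings are easy to quantify. Assuming embeddings are stored client-side as 32-bit floats (an assumption; the API itself returns JSON arrays of numbers):

```python
# Back-of-the-envelope storage cost per embedding, assuming float32
# storage on the client side.
BYTES_PER_FLOAT = 4

full = 5120 * BYTES_PER_FLOAT        # bytes per full embedding
compressed = 128 * BYTES_PER_FLOAT   # bytes per compressed embedding

print(full, compressed, full // compressed)  # 20480 512 40
```

A compressed embedding is 40x smaller, and distance computations over it touch 40x fewer values.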

    normalize boolean

    Default value: false

    Return normalized embeddings. This can be used to save on additional compute when applying a cosine similarity metric.
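The compute saving comes from the fact that for unit-length vectors, cosine similarity reduces to a plain dot product. A minimal pure-Python sketch (the vectors here are toy values, not real embeddings):

```python
import math

def normalize(v):
    """Scale a vector to unit length (what normalize=true asks the API to do)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(a, b):
    """Full cosine similarity: dot product divided by the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
na, nb = normalize(a), normalize(b)

# For normalized vectors the dot product alone equals the cosine similarity,
# so the per-comparison norm computations can be skipped.
dot_of_normalized = sum(x * y for x, y in zip(na, nb))
print(abs(dot_of_normalized - cosine(a, b)) < 1e-12)
```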

    contextual_control_threshold number nullable

    If set to null, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-null value, we apply the control parameters to similar tokens as well. Controls that have been applied to one token will then be applied to all other tokens that have at least the similarity score defined by this parameter. The similarity score is the cosine similarity of token embeddings.

    control_log_additive boolean

    Default value: true

    true: apply controls on prompt items by adding the log(control_factor) to attention scores. false: apply controls on prompt items by (attention_scores - -attention_scores.min(-1)) * control_factor

Responses

OK

Schema
    model_version string

    model name and version (if any) of the used model for inference

    embeddings array[]

    One embedding per prompt, in the same order as the request's prompts.
    num_tokens_prompt_total integer

    Number of tokens in all prompts combined.

    Tokenization:

    • Token ID arrays are used as-is.
    • Text prompt items are tokenized using the tokenizers specific to the model.
    • Each image is converted into a fixed amount of tokens that depends on the chosen model.
curl -L -X POST 'https://api.aleph-alpha.com/batch_semantic_embed' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <TOKEN>' \
--data-raw '{
"model": "luminous-base",
"prompts": ["An apple a day keeps the doctor away."],
"representation": "symmetric",
"compress_to_size": 128
}'
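The same request can be issued from Python using only the standard library. This is a hedged sketch: the request is built but not sent (the `urlopen` call is commented out so it runs without credentials), and `<TOKEN>` is a placeholder as in the curl example.

```python
# Python equivalent of the curl call above, using only the standard library.
import json
import urllib.request

API_TOKEN = "<TOKEN>"  # placeholder, as in the curl example

body = {
    "model": "luminous-base",
    "prompts": ["An apple a day keeps the doctor away."],
    "representation": "symmetric",
    "compress_to_size": 128,
}

req = urllib.request.Request(
    "https://api.aleph-alpha.com/batch_semantic_embed",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    method="POST",
)

# Sending (commented out so the sketch runs without a real token):
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
#     print(result["model_version"], len(result["embeddings"]))

print(req.get_method(), req.full_url)
```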