Embeddings
POST https://api.aleph-alpha.com/embed
Embeds a text using a specific model. The resulting vectors can be used for downstream tasks (e.g. semantic similarity) and models (e.g. classifiers). To obtain a valid model name, use GET /models_available.
Request
Query Parameters
Setting this to True signals to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones.
- application/json
Body
required
Name of the model to use. A model name refers to a model architecture (number of parameters, among others). The latest version of the model is always used. The model output contains information about the model version.
Possible values: [aleph-alpha, null]
Optional parameter that specifies which datacenters may process the request. You can either set the parameter to "aleph-alpha" or omit it (defaulting to null).
Not setting this value, or setting it to null, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximum availability.
Setting it to "aleph-alpha" allows us to process the request only in our own datacenters. Choose this option for maximal data privacy.
prompt object required
A list of layer indices from which to return embeddings.
- Index 0 corresponds to the word embeddings used as input to the first transformer layer
- Index 1 corresponds to the hidden state as output by the first transformer layer, index 2 to the output of the second layer etc.
- Index -1 corresponds to the last transformer layer (not the language modelling head), index -2 to the second last
Flag indicating whether the tokenized prompt is to be returned (True) or not (False)
Pooling operation to use. Pooling operations include:
- mean: Aggregate token embeddings across the sequence dimension using an average.
- weighted_mean: Position weighted mean across sequence dimension with latter tokens having a higher weight.
- max: Aggregate token embeddings across the sequence dimension using a maximum.
- last_token: Use the last token.
- abs_max: Aggregate token embeddings across the sequence dimension using a maximum of absolute values.
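The exact implementations are internal to the API; as a rough sketch of what each pooling operation computes over a toy matrix of token embeddings (sequence length × hidden size), assuming linearly increasing position weights for weighted_mean (the actual weighting scheme is not specified here):

```python
import numpy as np

# Toy token embeddings for a 4-token sequence with hidden size 3.
token_embeddings = np.array([
    [0.1, -0.2, 0.3],
    [0.4, 0.0, -0.1],
    [-0.3, 0.5, 0.2],
    [0.2, -0.4, 0.6],
])

# mean: plain average over the sequence dimension.
mean_pooled = token_embeddings.mean(axis=0)

# weighted_mean: later tokens weighted more heavily (linear weights assumed here).
weights = np.arange(1, len(token_embeddings) + 1, dtype=float)
weighted_mean = (token_embeddings * weights[:, None]).sum(axis=0) / weights.sum()

# max: element-wise maximum over the sequence dimension.
max_pooled = token_embeddings.max(axis=0)

# last_token: embedding of the final token.
last_token = token_embeddings[-1]

# abs_max: per dimension, the value with the largest magnitude, sign preserved.
abs_idx = np.abs(token_embeddings).argmax(axis=0)
abs_max = token_embeddings[abs_idx, np.arange(token_embeddings.shape[1])]
```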
Explicitly set the embedding type to be passed to the model. This parameter was created to allow for semantic_embed embeddings and will be deprecated. Please use the semantic_embed endpoint instead.
Default value: false
Return normalized embeddings. This can be used to save on additional compute when applying a cosine similarity metric.
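To illustrate the compute saving: once vectors are unit length, cosine similarity reduces to a plain dot product, skipping the two norm computations. A sketch with made-up vectors:

```python
import math

def normalize(v):
    # Scale a vector to unit length.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = [3.0, 4.0], [1.0, 2.0]

# Full cosine similarity on raw vectors...
cosine = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

# ...equals a plain dot product once both vectors are normalized.
fast_cosine = dot(normalize(a), normalize(b))
```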
If set to null, attention control parameters only apply to those tokens that have explicitly been set in the request.
If set to a non-null value, we apply the control parameters to similar tokens as well.
Controls that have been applied to one token will then be applied to all other tokens
that have at least the similarity score defined by this parameter.
The similarity score is the cosine similarity of token embeddings.
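A sketch of how such a control could propagate, using hypothetical token embeddings and a hypothetical threshold value (the real embeddings live inside the model):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical token embeddings; "dog" carries an explicit control.
controlled = [0.9, 0.1, 0.2]
candidates = {
    "puppy": [0.8, 0.2, 0.25],   # similar token -> control also applied
    "kettle": [-0.1, 0.9, 0.3],  # dissimilar token -> left untouched
}
threshold = 0.8  # the contextual control similarity parameter

also_controlled = [
    token for token, emb in candidates.items()
    if cosine_similarity(controlled, emb) >= threshold
]
```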
Default value: true
- true: apply controls on prompt items by adding log(control_factor) to attention scores.
- false: apply controls on prompt items by (attention_scores - attention_scores.min(-1)) * control_factor
Responses
- 200
OK
- application/json
- Schema
- Example (from schema)
Model name and version (if any) of the model used for inference.
embeddings: a dict with layer names as keys and pooling outputs as values. A pooling output is a dict with the pooling operation as key and a pooled embedding (list of floats) as value.
{
"model_version": "2021-12",
"embeddings": {
"layer_0": {
"max": [
-0.053497314,
0.0053749084,
0.06427002,
0.05316162,
-0.0044059753,
"..."
]
},
"layer_1": {
"max": [
0.14086914,
-0.24780273,
1.3232422,
-0.07055664,
1.2148438,
"..."
]
}
},
"tokens": null
}
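A minimal sketch of reading the pooled vectors out of such a response body, using an abbreviated copy of the example above:

```python
import json

# Abbreviated copy of the example response body.
response_body = """
{
  "model_version": "2021-12",
  "embeddings": {
    "layer_0": {"max": [-0.053497314, 0.0053749084, 0.06427002]},
    "layer_1": {"max": [0.14086914, -0.24780273, 1.3232422]}
  },
  "tokens": null
}
"""

data = json.loads(response_body)

# One pooled vector per requested layer and pooling operation.
layer_0_max = data["embeddings"]["layer_0"]["max"]
layers_returned = sorted(data["embeddings"].keys())
```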
curl -L -X POST 'https://api.aleph-alpha.com/embed' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Bearer <TOKEN>' \
--data-raw '{
"model": "luminous-base",
"prompt": "An apple a day keeps the doctor away.",
"layers": [
0,
1
],
"tokens": false,
"pooling": [
"max"
],
"type": "default"
}'
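A Python equivalent of the curl call above, using only the standard library. The request body mirrors the example; replace <TOKEN> with a real API token before running:

```python
import json
import urllib.request

API_URL = "https://api.aleph-alpha.com/embed"
TOKEN = "<TOKEN>"  # replace with your API token

payload = {
    "model": "luminous-base",
    "prompt": "An apple a day keeps the doctor away.",
    "layers": [0, 1],
    "tokens": False,
    "pooling": ["max"],
    "type": "default",
}

def embed(body):
    # POST the JSON body with the same headers as the curl example.
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

if __name__ == "__main__":
    result = embed(payload)
    print(result["model_version"])
```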