Better understand the source of a completion: specifically, how much each section of the prompt impacts each token of the completion.


Query Parameters

    nice boolean

    Setting this to true signals to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones.


    model string required

    Name of the model to use.

    hosting string nullable

    Possible values: [aleph-alpha]

    Determines in which datacenters the request may be processed. You can either set the parameter to "aleph-alpha" or omit it (defaulting to None).

    Not setting this value, or setting it to None, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximal availability.

    Setting it to "aleph-alpha" allows us to only process the request in our own datacenters. Choose this option for maximal data privacy.

    prompt object required

    This field is used to send prompts to the model. A prompt can either be a text prompt or a multimodal prompt. A text prompt is a string of text. A multimodal prompt is an array of prompt items. It can be a combination of text, images, and token ID arrays.

    In the case of a multimodal prompt, the prompt items will be concatenated and a single prompt will be used for the model.


    • Token ID arrays are used as-is.
    • Text prompt items are tokenized using the tokenizers specific to the model.
    • Each image is converted into 144 tokens.
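
The two prompt shapes can be sketched as follows. The per-item field names ("type", "data") are assumptions for illustration and are not confirmed by this page:

```python
# A plain text prompt is just a string.
text_prompt = "An apple a day"

# A multimodal prompt is an array of prompt items; the items are
# concatenated into a single prompt for the model.
multimodal_prompt = [
    {"type": "text", "data": "Describe the image: "},  # tokenized with the model's tokenizer
    {"type": "token_ids", "data": [1, 5, 42]},         # used as-is
    # {"type": "image", "data": "<base64-encoded image>"}  # converted into 144 tokens
]
```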


    target string nullable required

    The completion string to be explained based on model probabilities.

    control_factor number

    Default value: 0.1

    Factor to apply to the given token in the attention matrix.

    • 0 <= factor < 1 => Suppress the given token
    • factor == 1 => identity operation, no change to attention
    • factor > 1 => Amplify the given token
    contextual_control_threshold number nullable

    If set to null, attention control parameters only apply to those tokens that have explicitly been set in the request. If set to a non-null value, we apply the control parameters to similar tokens as well. Controls that have been applied to one token will then be applied to all other tokens that have at least the similarity score defined by this parameter. The similarity score is the cosine similarity of token embeddings.
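
The token-selection rule above can be sketched as a small helper. This is an illustrative reimplementation, not the service's actual code; the embeddings here are toy vectors:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two token-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def tokens_to_control(embeddings, controlled_idx, threshold):
    """Indices of tokens whose embedding is at least `threshold`-similar
    to the explicitly controlled token (the controlled token included)."""
    ref = embeddings[controlled_idx]
    return [i for i, emb in enumerate(embeddings)
            if cosine_similarity(ref, emb) >= threshold]
```

With a threshold of, say, 0.9, controls placed on one token spread to every token whose embedding clears that similarity score.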

    control_log_additive boolean

    Default value: true

    true: apply controls on prompt items by adding log(control_factor) to the attention scores. false: apply controls on prompt items by computing (attention_scores - attention_scores.min(-1)) * control_factor
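
The two modes can be illustrated on a plain list of attention scores. This is a sketch of the described behaviour, not the service's implementation:

```python
import math

def apply_control(attention_scores, control_factor, log_additive=True):
    """Apply an attention control to raw attention scores.

    log_additive=True: add log(control_factor) to each score.
    log_additive=False: shift scores so the minimum is 0, then scale
    by control_factor.
    """
    if log_additive:
        return [s + math.log(control_factor) for s in attention_scores]
    lo = min(attention_scores)
    return [(s - lo) * control_factor for s in attention_scores]
```

Note that with control_factor == 1, the log-additive mode adds log(1) = 0 and leaves the scores unchanged, matching the identity behaviour of control_factor described above.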

    postprocessing string

    Possible values: [none, absolute, square]

    Default value: none

    Optionally apply postprocessing to the difference in cross-entropy scores for each token. "none": apply no postprocessing. "absolute": return the absolute value of each score. "square": square each score.

    normalize boolean

    Default value: false

    Return normalized scores. The minimum score becomes 0 and the maximum score becomes 1. Applied after any postprocessing.
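
Taken together, postprocessing and normalization form a small pipeline, which can be sketched like this (an illustration of the described options, not the actual implementation):

```python
def postprocess(scores, mode="none", normalize=False):
    """Postprocess per-token cross-entropy score differences, then
    optionally min-max normalize to [0, 1]."""
    if mode == "absolute":
        scores = [abs(s) for s in scores]
    elif mode == "square":
        scores = [s * s for s in scores]
    elif mode != "none":
        raise ValueError(f"unknown postprocessing: {mode}")
    if normalize:
        lo, hi = min(scores), max(scores)
        scores = [(s - lo) / (hi - lo) for s in scores]
    return scores
```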

    prompt_granularity object
    type string

    Possible values: [token, word, sentence, paragraph, custom]

    At which granularity should the target be explained in terms of the prompt. If you choose, for example, "sentence" then we report the importance score of each sentence in the prompt towards generating the target output.

    If you do not choose a granularity then we will try to find the granularity that brings you closest to around 30 explanations. For large documents, this would likely be sentences. For short prompts this might be individual words or even tokens.

    If you choose a custom granularity then you must provide a custom delimiter. We then split your prompt by that delimiter. This might be helpful if you are using few-shot prompts that contain stop sequences.

    For image prompt items, the granularity determines into how many tiles the image is divided for the explanation:

    • "token" -> 12x12
    • "word" -> 6x6
    • "sentence" -> 3x3
    • "paragraph" -> 1

    delimiter string

    A delimiter string to split the prompt on if "custom" granularity is chosen.
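
With "custom" granularity, the prompt is simply split on the delimiter you provide, which is handy for few-shot prompts separated by a stop sequence. A sketch with a hypothetical "###" separator:

```python
# Hypothetical few-shot prompt whose examples are separated by "###".
few_shot_prompt = "Q: 2+2?\nA: 4\n###\nQ: 3+3?\nA: 6\n###\nQ: 5+5?\nA:"

# With prompt_granularity {"type": "custom", "delimiter": "###"},
# the prompt is split into these chunks, and each chunk receives
# its own importance score.
chunks = few_shot_prompt.split("###")
```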

    target_granularity string

    Possible values: [complete, token]

    Default value: complete

    How many explanations should be returned in the output.

    "complete" -> Return one explanation for the entire target. Helpful in many cases to determine which parts of the prompt contribute overall to the given completion. "token" -> Return one explanation for each token in the target.

    control_token_overlap string

    Possible values: [partial, complete]

    Default value: partial

    What to do if a control partially overlaps with a text or image token.

    If set to "partial", the factor is adjusted proportionally to how much of the token the control covers. For example, a factor of 2.0 on a control that covers only 2 of 4 token characters would be adjusted to 1.5. (The factor always moves closer to 1, since 1 is the identity operation for control factors.)

    If set to "complete", the full factor will be applied as long as the control overlaps with the token at all.
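
The adjustment rule can be sketched as follows; the proportional-interpolation formula is inferred from the worked example above (2.0 over half a token -> 1.5):

```python
def adjust_factor(factor, covered_chars, token_chars, mode="partial"):
    """Adjust a control factor for a token the control only partially covers.

    "partial": move the factor toward 1 (the identity) in proportion
    to the fraction of the token that is covered.
    "complete": any overlap applies the full factor.
    """
    if mode == "complete":
        return factor
    fraction = covered_chars / token_chars
    return 1 + (factor - 1) * fraction
```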



Response

    model_version string
    explanations object[]

    This array will contain one explanation object for each token in the target string.

  • Array [
  • target string

    The string representation of the target token being explained.

    items object[]

    Contains one item for each prompt item (in order), and the last item refers to the target.

  • Array [
  • oneOf
    type string

    Possible values: [token_ids]

    scores number[]
  • ]
  • ]
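
Putting the parameters together, a request might look like the sketch below. The host, endpoint path, auth header, and model name are assumptions based on common REST conventions, not confirmed by this page:

```python
import json
import urllib.request

# Hypothetical endpoint URL -- replace with the real API host and path.
API_URL = "https://api.example.com/explain"

payload = {
    "model": "example-model",              # assumed model name
    "prompt": "An apple a day",
    "target": " keeps the doctor away",
    "prompt_granularity": {"type": "sentence"},
    "target_granularity": "complete",
    "postprocessing": "absolute",
    "normalize": True,
}

def explain(token: str) -> dict:
    """Send the explanation request and return the parsed JSON response.

    Each entry in response["explanations"] pairs a target token (or the
    whole target, with "complete" granularity) with per-prompt-item
    "items", whose "scores" arrays hold the importance values.
    """
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```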