Q&A
POST /qa
This endpoint has been deprecated and may be replaced or removed in future versions of the API.
Answers a question about text given in a prompt. This interface is deprecated and will be removed in a later version. New methods for processing Q&A tasks will be provided before it is removed.
Request
Query Parameters
Setting this to True signals to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones.
- application/json
Body
required
- Docx: A base64 encoded Docx file
- Text: A string of text
- Prompt: A multimodal prompt, as is used in our other tasks like Completion
- Array [
  - Docx
  - Text
  - Prompt
    - Array [
      - Text
      - Image
      - Token Ids
    - ]
- ]
For the controls attached to each prompt item type (text, image, token ids), the attention factor has the following semantics:
- 0 <= factor < 1 => Suppress the given token
- factor == 1 => Identity operation, no change to attention
- factor > 1 => Amplify the given token
Possible values: [aleph-alpha, null]
Optional parameter that specifies which datacenters may process the request. You can either set the parameter to "aleph-alpha" or omit it (defaulting to null).
Not setting this value, or setting it to null, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximum availability.
Setting it to "aleph-alpha" allows us to only process the request in our own datacenters. Choose this option for maximal data privacy.
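For illustration, a minimal sketch of setting this option in a request body; the field name "hosting" is an assumption based on this description:

privacy_body = {"hosting": "aleph-alpha"}  # process only in Aleph Alpha datacenters
availability_body = {"hosting": None}      # null: maximum availability (the default)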
The question to be answered about the prompt by the model. The prompt may not contain a valid answer.
documents object[] required
A list of documents, in formats valid for tasks like Q&A and Summarization.
These can be one of the following formats:
Docx and Text documents are usually preferred and have optimisations (such as chunking) applied to make them work better with the task being performed.
Prompt documents are assumed to be used for advanced use cases, and will be left as-is.
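As a sketch, the three document shapes expressed as Python dicts; the field names docx, text, and prompt are assumptions inferred from the format names above:

import base64

# Docx: a base64 encoded .docx file (field name "docx" is an assumption).
with open("report.docx", "rb") as f:
    docx_document = {"docx": base64.b64encode(f.read()).decode("ascii")}

# Text: a plain string of text.
text_document = {"text": "Andreas likes Pizza."}

# Prompt: a multimodal prompt, passed through as-is (no chunking applied).
prompt_document = {"prompt": [{"type": "text", "text": "Andreas likes Pizza."}]}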
prompt object[]
An array of prompt items for a multimodal request. Can support any combination of text, images, and token IDs.
Possible values: [text]
controls object[]
Starting character index to apply the factor to.
The number of characters to apply the factor to.
Factor to apply to the given token in the attention matrix.
Possible values: [partial, complete]
Default value: partial
What to do if a control partially overlaps with a text token.
If set to "partial", the factor will be adjusted proportionally with the amount of the token it overlaps. So a factor of 2.0 of a control that only covers 2 of 4 token characters, would be adjusted to 1.5. (It always moves closer to 1, since 1 is an identiy operation for control factors.)
If set to "complete", the full factor will be applied as long as the control overlaps with the token at all.
Possible values: [image]
An image sent as part of a prompt to a model. The image is represented as base64.
Note: The models operate on square images. All non-square images are center-cropped before going to the model, so portions of the image may not be visible.
You can supply specific cropping parameters to choose a different area of the image than a center-crop, or you can transform the image yourself to a square before sending it.
x-coordinate of top left corner of cropping box in pixels
y-coordinate of top left corner of cropping box in pixels
Size of the cropping square in pixels
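A sketch of supplying a custom square crop instead of the default center-crop; x, y, and size match the cropping parameters described above, while the image field name is an assumption:

import base64

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

image_item = {
    "type": "image",
    "source": image_b64,  # base64 image data; field name "source" is an assumption
    # Crop a 600x600 pixel square whose top-left corner sits at pixel (100, 50),
    # overriding the default center-crop.
    "x": 100,
    "y": 50,
    "size": 600,
}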
controls object[]
rect object required
Bounding box in logical coordinates. From 0 to 1. With (0,0) being the upper left corner, and relative to the entire image.
Keep in mind, non-square images are center-cropped by default before going to the model. (You can specify a custom cropping if you want.) Since control coordinates are relative to the entire image, all or a portion of your control may be outside the "model visible area".
x-coordinate of top left corner of the control bounding box. Must be a value between 0 and 1, where 0 is the left corner and 1 is the right corner.
y-coordinate of top left corner of the control bounding box. Must be a value between 0 and 1, where 0 is the top pixel row and 1 is the bottom row.
Width of the control bounding box. Must be a value between 0 and 1, where 1 means the full width of the image.
Height of the control bounding box. Must be a value between 0 and 1, where 1 means the full height of the image.
Factor to apply to the given token in the attention matrix.
Possible values: [partial, complete]
Default value: partial
What to do if a control partially overlaps with an image token.
If set to "partial", the factor will be adjusted proportionally with the amount of the token it overlaps. So a factor of 2.0 of a control that only covers half of the image "tile", would be adjusted to 1.5. (It always moves closer to 1, since 1 is an identiy operation for control factors.)
If set to "complete", the full factor will be applied as long as the control overlaps with the token at all.
Possible values: [token_ids]
controls object[]
Index of the token, relative to the list of token IDs in the current prompt item.
Factor to apply to the given token in the attention matrix.
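And a sketch of a token-ids prompt item with a control; apart from index and factor, which follow the descriptions above, the field names are assumptions:

token_item = {
    "type": "token_ids",
    "token_ids": [1, 734, 16, 99],  # raw token IDs (assumed field name)
    "controls": [
        {
            "index": 1,    # position within the token ID list above
            "factor": 0.3, # 0 <= factor < 1 suppresses this token
        }
    ],
}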
Possible values: >= 1 and <= 200
Default value: 30
The maximum number of answers to return for this query. A smaller maximum can return answers sooner, since fewer answers have to be generated.
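Putting the pieces together, a minimal sketch of the full request with Python and requests; the base URL, the authentication scheme, and the body field names query, hosting, and maximum_answers are assumptions for illustration:

import os
import requests

API_BASE = "https://api.aleph-alpha.com"  # assumed base URL
TOKEN = os.environ["AA_TOKEN"]            # assumed token-based auth

payload = {
    "query": "What does Andreas like?",   # assumed field name for the question
    "documents": [{"text": "Andreas likes Pizza."}],
    "hosting": None,                      # null: maximum availability
    "maximum_answers": 3,                 # assumed field name; 1..200, default 30
}

response = requests.post(
    f"{API_BASE}/qa",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
response.raise_for_status()
print(response.json())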
Responses
- 200
OK
- application/json
Schema
Name and version (if any) of the model used for inference.
answers object[]
List of answers, one answer per chunk.
The answer generated by the model for a given chunk.
Quality score of the answer.
The evidence from the source document for the given answer.
{
"answers": [
{
"answer": "Andreas",
"score": 0.9980973,
"evidence": "Andreas likes Pizza."
}
],
"model_version": "2021-12"
}
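For instance, picking the highest-scoring answer out of such a response once it has been parsed:

result = {
    "answers": [
        {"answer": "Andreas", "score": 0.9980973, "evidence": "Andreas likes Pizza."}
    ],
    "model_version": "2021-12",
}

best = max(result["answers"], key=lambda a: a["score"])
print(best["answer"], "-", best["evidence"])  # Andreas - Andreas likes Pizza.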