
Q&A

POST 

/qa

deprecated

This endpoint has been deprecated and may be replaced or removed in future versions of the API.

Answers a question about text given in a prompt. This interface is deprecated and will be removed in a later version. New methodologies for processing Q&A tasks will be provided before this endpoint is removed.

Request

Query Parameters

    nice (boolean)

    Setting this to true signals to the API that you intend to be nice to other users by de-prioritizing your request below concurrent ones.

Body (required)

    hosting (string, nullable)

    Optional parameter that specifies which datacenters may process the request. You can either set the parameter to "aleph-alpha" or omit it (defaulting to null).

    Not setting this value, or setting it to null, gives us maximal flexibility in processing your request in our own datacenters and on servers hosted with other providers. Choose this option for maximum availability.

    Setting it to "aleph-alpha" restricts processing to our own datacenters only. Choose this option for maximal data privacy.

    Possible values: [aleph-alpha, null]
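For example, a request body pinned to Aleph Alpha's own datacenters (the query and document values are illustrative) could look like:

```json
{
  "hosting": "aleph-alpha",
  "query": "Who likes Pizza?",
  "documents": [
    { "text": "Andreas likes Pizza." }
  ]
}
```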

    query (string, required)

    The question to be answered about the prompt by the model. Note that the prompt might not contain a valid answer.

    documents (object[], required)

    A list of documents, in formats valid for tasks such as Q&A and Summarization.

    These can be one of the following formats:

    • Docx: A base64 encoded Docx file
    • Text: A string of text
    • Prompt: A multimodal prompt, as is used in our other tasks like Completion

    Docx and Text documents are usually preferred and have optimisations (such as chunking) applied to make them work better with the task being performed.

    Prompt documents are assumed to be used for advanced use cases, and will be left as-is.
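As a sketch, a documents array mixing the Text and Docx formats could be assembled like this in Python (the placeholder bytes stand in for a real .docx file, and the helper names are hypothetical, not part of any SDK):

```python
import base64

def text_document(text: str) -> dict:
    # Text format: a plain string of text.
    return {"text": text}

def docx_document(raw_bytes: bytes) -> dict:
    # Docx format: the raw .docx file contents, base64 encoded.
    return {"docx": base64.b64encode(raw_bytes).decode("ascii")}

documents = [
    text_document("Andreas likes Pizza."),
    docx_document(b"placeholder bytes of a .docx file"),
]
```

Prompt-format documents would be passed through as-is, so no preprocessing helper is shown for them.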

  • Array [
  • oneOf
    docx (base64)
  • ]
  • max_answers (integer)

    The maximum number of answers to return for this query. A smaller value may return answers sooner, since fewer answers have to be generated.

    Possible values: >= 1 and <= 200

    Default value: 30

Responses

OK

Schema
    model_version (string)

    Name and version (if any) of the model used for inference.

    answers (object[])

    List of answers, one per chunk.

  • Array [
  • answer (string, required)

    The answer generated by the model for a given chunk.

    score (float, required)

    Quality score of the answer.

    evidence (string, required)

    The evidence from the source document for the given answer.

  • ]
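Assuming the response has been parsed into a dictionary of the shape above, the highest-scoring answer can be picked out like this in Python (all values are illustrative, not real model output):

```python
# Illustrative parsed response matching the schema above.
response = {
    "model_version": "example-model-v1",
    "answers": [
        {"answer": "Andreas", "score": 0.87, "evidence": "Andreas likes Pizza."},
        {"answer": "Pizza", "score": 0.12, "evidence": "Andreas likes Pizza."},
    ],
}

# One answer is returned per chunk; take the one with the best quality score.
best = max(response["answers"], key=lambda a: a["score"])
```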

Authorization: http

name: token
type: http
scheme: bearer
description: Can be generated in your [Aleph Alpha profile](https://app.aleph-alpha.com/profile)
var client = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Post, "https://docs.aleph-alpha.com/qa");
request.Headers.Add("Accept", "application/json");
request.Headers.Add("Authorization", "Bearer <token>");
var content = new StringContent("{\n \"query\": \"Who likes Pizza?\",\n \"documents\": [\n {\n \"text\": \"Andreas likes Pizza.\"\n },\n {\n \"docx\": \"b64;base64EncodedWordDocument\"\n }\n ]\n}", null, "application/json");
request.Content = content;
var response = await client.SendAsync(request);
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());
Body (required)
{
  "query": "Who likes Pizza?",
  "documents": [
    {
      "text": "Andreas likes Pizza."
    },
    {
      "docx": "b64;base64EncodedWordDocument"
    }
  ]
}