📄️ Introduction
If you just want to get started with our API, you've come to the right place. Here, you will find minimal code examples for each task currently available via our Python client.
📄️ Complete
Our `complete`-endpoint is the basic interface to our models. You can send a prompt, which can be any combination of text and images, to generate a (text) completion. If the concepts of prompting and completion are new to you, please refer to this section of our documentation.
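A minimal sketch of a completion call with the `aleph-alpha-client` Python package. The `AA_TOKEN` environment variable and the `luminous-base` model name are assumptions for illustration; substitute your own token and model.

```python
import os

from aleph_alpha_client import Client, CompletionRequest, Prompt

# Assumes an API token in the AA_TOKEN environment variable.
client = Client(token=os.environ["AA_TOKEN"])

request = CompletionRequest(
    prompt=Prompt.from_text("An apple a day"),
    maximum_tokens=32,  # cap the length of the generated completion
)
response = client.complete(request, model="luminous-base")
print(response.completions[0].completion)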
📄️ Evaluate
With our `evaluate`-endpoint you can score the likelihood of pre-defined completions. This is useful if you already know the output you expect and want to check how likely our models consider that completion. A major advantage is that the `evaluate`-endpoint is significantly faster than the `complete`-endpoint.
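A sketch of scoring a pre-defined completion, under the same assumptions as above (an `AA_TOKEN` environment variable and the `luminous-base` model as placeholders):

```python
import os

from aleph_alpha_client import Client, EvaluationRequest, Prompt

client = Client(token=os.environ["AA_TOKEN"])

# Score how likely the model considers the expected completion,
# instead of generating a new one.
request = EvaluationRequest(
    prompt=Prompt.from_text("An apple a day"),
    completion_expected=" keeps the doctor away",
)
response = client.evaluate(request, model="luminous-base")
print(response.result)  # scoring details, e.g. the completion's log-probability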
📄️ Embed
With our `embed`-endpoint you can embed any prompt into vector space.
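A sketch of a raw embedding request; the `layers` and `pooling` values shown are illustrative choices, not required defaults:

```python
import os

from aleph_alpha_client import Client, EmbeddingRequest, Prompt

client = Client(token=os.environ["AA_TOKEN"])

request = EmbeddingRequest(
    prompt=Prompt.from_text("An apple a day keeps the doctor away."),
    layers=[-1],       # which model layer(s) to read embeddings from
    pooling=["mean"],  # how to pool token embeddings into one vector
)
response = client.embed(request, model="luminous-base")
print(response.embeddings)  # one vector per requested (layer, pooling) pair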
📄️ Semantic Embed
With our `semanticembed`-endpoint you can create semantic embeddings for your text. This functionality can be used in a myriad of ways. For more information, please check out our blog post on Luminous-Explore, which introduces the model behind the `semanticembed`-endpoint.
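A sketch of a semantic embedding request. `SemanticRepresentation.Symmetric` suits comparing texts of similar kind; the client also offers query/document representations for asymmetric search. Token and model name are again placeholders.

```python
import os

from aleph_alpha_client import (
    Client,
    Prompt,
    SemanticEmbeddingRequest,
    SemanticRepresentation,
)

client = Client(token=os.environ["AA_TOKEN"])

request = SemanticEmbeddingRequest(
    prompt=Prompt.from_text("An apple a day keeps the doctor away."),
    representation=SemanticRepresentation.Symmetric,
)
response = client.semantic_embed(request, model="luminous-base")
print(len(response.embedding))  # a single vector of floats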
📄️ Summarize
Our `summarize`-endpoint can be used to generate summaries for longer texts.
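A sketch of a summarization call; note that depending on your client version, `summarize` may also expect a `model` argument:

```python
import os

from aleph_alpha_client import Client, Document, SummarizationRequest

client = Client(token=os.environ["AA_TOKEN"])

request = SummarizationRequest(
    document=Document.from_text("Your longer text goes here ..."),
)
response = client.summarize(request)
print(response.summary)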
📄️ Q&A
The `qa`-endpoint can be used to answer questions about one or more documents. To do this, you must specify both the document(s) and a question.
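A sketch of a question-answering request over a single in-memory document; the example question and document are made up for illustration:

```python
import os

from aleph_alpha_client import Client, Document, QaRequest

client = Client(token=os.environ["AA_TOKEN"])

request = QaRequest(
    query="When was Rome founded?",
    documents=[Document.from_text("Rome was founded in 753 BC.")],
)
response = client.qa(request)
for answer in response.answers:
    print(answer.answer)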
📄️ (De)-Tokenize
With the `tokenize`-endpoint you can use our own tokenizer to tokenize your texts for further use. In addition, you can turn token ids back into text with the `detokenize`-endpoint.
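A sketch of a tokenize/detokenize round trip; both endpoints take a model name, since tokenization is model-specific:

```python
import os

from aleph_alpha_client import Client, DetokenizationRequest, TokenizationRequest

client = Client(token=os.environ["AA_TOKEN"])

tok_request = TokenizationRequest(prompt="An apple a day", tokens=True, token_ids=True)
tok_response = client.tokenize(tok_request, model="luminous-base")
print(tok_response.tokens, tok_response.token_ids)

# Round-trip the token ids back into text.
detok_request = DetokenizationRequest(token_ids=tok_response.token_ids)
detok_response = client.detokenize(detok_request, model="luminous-base")
print(detok_response.result)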
📄️ Explain
With our `explain`-endpoint you can get an explanation of the model's output. More specifically, we return how much the log-probabilities of the already generated completion would change if we suppress individual parts (based on the granularity you choose) of a prompt. Please refer to this part of our documentation if you would like to know more about our explainability method in general.
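A sketch of an explanation request, asking how much each part of the prompt contributed to a given target completion; prompt and target are illustrative:

```python
import os

from aleph_alpha_client import Client, ExplanationRequest, Prompt

client = Client(token=os.environ["AA_TOKEN"])

request = ExplanationRequest(
    prompt=Prompt.from_text("An apple a day"),
    target=" keeps the doctor away",  # the completion to be explained
)
response = client.explain(request, model="luminous-base")
print(response.explanations)  # per-prompt-part contribution scores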