📄️ What is Luminous?
Luminous Model Family
📄️ Interacting with Luminous models
You can interact with a Luminous model by sending it text; we call this a prompt. The model then produces text that continues your input and returns it to you; this is what we call a completion. Generally speaking, our models attempt to find the best continuation for a given input. In practice, this means the model first recognizes the style of the prompt and then attempts to continue it accordingly. Depending on the task at hand, the structure and content of the prompt are essential to generating completions that match the task. Using the techniques laid out in the following sections, you can instruct our models to solve a wide variety of text-based tasks. Note that more complex tasks may require larger models. Find examples for tasks of varying complexity below:
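The prompt-and-completion loop can be sketched with a stand-in function. This stub is purely hypothetical and exists only to make the shapes concrete: text goes in as the prompt, and the model's continuation comes back as the completion.

```python
# Hypothetical stand-in for a Luminous completion call, used only to
# illustrate the prompt/completion shapes: text in, continuation out.
def complete(prompt: str) -> str:
    # A real model generates a continuation; this stub returns a canned
    # one so the example stays self-contained.
    canned = {
        "Q: What is the capital of France?\nA:": " Paris",
    }
    return canned.get(prompt, "")

prompt = "Q: What is the capital of France?\nA:"
completion = complete(prompt)  # the model's continuation of the prompt
```

Note that the model only ever returns the continuation; the full text is the concatenation of prompt and completion.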
📄️ Zero-Shot Prompting
For certain tasks, simply providing a natural language instruction to the model may be sufficient to obtain a good completion. This is called zero-shot prompting. Let’s illustrate this using an example.
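A minimal sketch of a zero-shot prompt (the task and text here are invented for illustration): the natural-language instruction alone, with no worked examples, ending in a cue for the model to continue.

```python
# Zero-shot prompt: a bare natural-language instruction, no examples.
prompt = (
    "Identify the language of the following text.\n"
    "Text: Das Wetter ist heute schön.\n"
    "Language:"
)
```

Ending the prompt with "Language:" nudges the model to answer with just the language name rather than free-form prose.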
📄️ Few-Shot Prompting
For more complicated tasks, or those that require a specific format, you may have to explicitly show how to properly continue your prompt – by providing one or more examples. This is called few-shot learning (or one-shot learning in the case of just a single example). Let’s have a look at how this plays out in practice.
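Few-shot prompting can be sketched as assembling solved examples and the new query into one prompt. The sentiment task, field names, and "###" separator below are illustrative assumptions, not a prescribed format.

```python
# Few-shot prompt: show the model the desired input/output format with
# a handful of solved examples, then append the unsolved query.
def build_few_shot_prompt(examples, query):
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n###\n".join(blocks)  # "###" separates the examples

examples = [
    ("The acting was superb.", "positive"),
    ("I want those two hours back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A charming, well-paced film.")
```

Because every example ends with a filled-in "Sentiment:" field and the query's field is left blank, the most natural continuation is the label for the query.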
📄️ Control Models Prompting
Our Luminous-control models have been optimized to work well with zero-shot instructions. This means they do not necessarily need a set of examples, as in few-shot learning. Still, as we learned previously, a clear structure can help elicit the desired completion from the model. For control models, the prompt should follow this pattern:
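As an illustration only, an instruction-style prompt might be assembled as below. The field names and layout here are hypothetical; the exact pattern the control models expect is the one given on the linked page.

```python
# Hypothetical instruction-style prompt builder; the exact pattern
# expected by the Luminous-control models is documented separately.
def build_instruction_prompt(instruction: str, text: str) -> str:
    return f"{instruction}\n\nText: {text}\n\nAnswer:"

prompt = build_instruction_prompt(
    "Summarize the following text in one sentence.",
    "Luminous is a family of large language models.",
)
```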
📄️ Tokens
Language models can only work with data in a digestible format. In essence, such models consist of many large matrices of floating-point numbers. These matrices operate not on characters but on numbers. Therefore, sequences of characters are “translated” into sequences of integers, so-called tokens.
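A toy sketch of the idea: map pieces of text to integer ids by greedily matching the longest known piece at each position. The tiny vocabulary here is invented; the real Luminous tokenizer is far larger and subword-based.

```python
# Toy tokenizer: greedily match the longest known piece at each position
# and emit its integer id. The vocabulary is invented for illustration.
VOCAB = {"lum": 0, "ino": 1, "us": 2, " ": 3, "model": 4}

def tokenize(text: str, vocab: dict) -> list:
    pieces = sorted(vocab, key=len, reverse=True)  # longest match first
    tokens, i = [], 0
    while i < len(text):
        for piece in pieces:
            if text.startswith(piece, i):
                tokens.append(vocab[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"no vocabulary piece covers position {i}")
    return tokens

print(tokenize("luminous model", VOCAB))  # → [0, 1, 2, 3, 4]
```

The model then works on these integer sequences; the completion it produces is likewise a token sequence that gets translated back into text.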
📄️ Model Card Luminous
The Luminous series is a family of large language models. Large language models are powerful technological tools that can process and produce text. These capabilities emerge during model “training”, where the model is exposed to large amounts of human text data. Like a person who deliberately absorbs information while reading through a whole library and half of the internet, large language models acquire a structural understanding of language (though not necessarily factual knowledge) and accumulate information about the world.
The Luminous series is a family of large language models. Large language models are powerful technological tools that can process and produce text. These capabilities emerge during model “training” where the model is exposed to significant amounts of human text data. Similar to a person who deliberately absorbs information while reading a whole library and half of the internet, large language models acquire structural understanding (and not necessarily also knowledge) of language and accumulated information about the world.