Few-Shot Prompting

To effectively utilize a language model for a specific application, it's important to leverage its inherent capability to comprehend and process natural language. Given that these models are fundamentally designed to understand language, a straightforward approach to task execution could involve directly instructing the model in natural language to carry out the desired task.

In general, we recommend using zero-shot prompting with the Luminous-*-control models. Still, there are situations where it makes more sense to explicitly demonstrate the task you want solved via few-shot prompting. For this, you should use the non-control models (also referred to as vanilla models).

Let's say, for example, that you wanted to ask the model for vacation tips...

I want to travel to a location where I can enjoy sun, sports and landscapes. Where could I go?

Think about how you would answer this query. You may get the gist and list a few possible locations that satisfy the given constraints. On the other hand, you could reply with a lengthy report of that one beautiful place you once visited. You could also say “I don’t know”. Anyhow, there are a dozen different ways in which you could answer this question, none of which are strictly right or wrong. Accordingly, Luminous-base outputs:

I’ve been to the Caribbean, and I’ve been to the Mediterranean, but I’ve never been to California.

Well, that’s not quite what we were hoping for. Let’s try to improve this prompt. To maximize our chances that the model outputs a list of cool vacation destinations, we should show it examples of how to perform the task. Large language models generally work best when instructed this way. This prompting method is called few-shot prompting (or one-shot prompting in the case of a single example). Let’s have a look at how this plays out in practice.

Recommend a travel destination based on the keywords. 
Keywords: Historic City, Scenic Views, Castle 
Travel Destination: Heidelberg, Germany. Heidelberg is one of the most beautiful cities in Germany, with a historic "Altstadt" and a castle. 
Keywords: Lake, Forest, Fishing 
Travel Destination: Lake Tahoe, California. This famous lake in the Sierra Nevada mountains offers an amazing variety of outdoor activities. 
Keywords: Museums, Food, History 
Travel Destination: Rome, Italy. Rome is a city full of museums, historic sites and excellent restaurants. 
Keywords: Beaches, Party, Hiking 
Travel Destination: Mallorca, Spain. This island is famous for its sandy beaches, hilly landscape and vibrant nightlife. 
Keywords: Sun, Sports, Landscapes 
Travel Destination: Phuket, Thailand. This island in the Andaman Sea is known for its pristine beaches and wide variety of outdoor activities.

This looks much better! By instructing the model this way, we were able to more clearly illustrate the output we would like to generate.
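The pattern above can be assembled programmatically. The sketch below is a minimal, illustrative helper (the function name and structure are our own, not part of any SDK): it concatenates an instruction, the worked examples, and the new keywords, ending the prompt right after "Travel Destination:" so the model completes the answer from there.

```python
# Illustrative few-shot prompt builder; the examples mirror the prompt shown above.
EXAMPLES = [
    ("Historic City, Scenic Views, Castle",
     'Heidelberg, Germany. Heidelberg is one of the most beautiful cities in '
     'Germany, with a historic "Altstadt" and a castle.'),
    ("Lake, Forest, Fishing",
     "Lake Tahoe, California. This famous lake in the Sierra Nevada mountains "
     "offers an amazing variety of outdoor activities."),
]

def build_few_shot_prompt(instruction: str, examples, query_keywords: str) -> str:
    """Concatenate instruction, worked examples, and the new query.

    The prompt ends immediately after "Travel Destination:" (no trailing
    space), so the model's completion becomes the recommendation.
    """
    parts = [instruction]
    for keywords, destination in examples:
        parts.append(f"Keywords: {keywords}\nTravel Destination: {destination}")
    parts.append(f"Keywords: {query_keywords}\nTravel Destination:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Recommend a travel destination based on the keywords.",
    EXAMPLES,
    "Sun, Sports, Landscapes",
)
print(prompt)
```

The resulting string can then be sent to the completion endpoint of your client of choice; only the prompt-building step is shown here.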


Let’s go over some more tips and tricks for prompt design. In this case, we use a Q&A example to illustrate them. However, they are just as useful for all other kinds of prompting.

  1. Structure your prompt. For example, instead of asking:
    How many people live in New York City?
    try structuring the prompt as a question-and-answer pair:
    Q: How many people live in New York City?
    A: There are about 8.5 million people in the city.
    If you are using few-shot prompting, clearly separate the examples with a separator such as "###":
    Q: How long is the river Nile?
    A: It's roughly 6650 kilometers long.
    ###
    Q: How many people live in New York City?
    A: About 8.5 million people.
  2. Do not end your prompt on a space character; the model will most likely return nonsense. For example, a prompt ending in "A: " (with a trailing space) may yield:
    Q: How many people live in New York City?
    A: B:
  3. Check your prompts for mistakes. Spelling mistakes, grammatical errors or even double spaces may adversely affect your results.
    Q: How many people live in New York Cty?
    A: There are about 8.5 million people in the state of New York.
    In this example, the model did not recognize the misspelled word "Cty" as "city". It returned the population of New York City but attributed it to the state of New York, so the answer is wrong.
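Tips 1 and 2 can be combined in a small helper. The sketch below (the function name is our own, for illustration) joins Q&A example pairs with a "###" separator and guarantees the prompt ends with "A:" rather than a trailing space:

```python
def join_qa_examples(pairs, query: str, sep: str = "\n###\n") -> str:
    """Join Q/A example pairs with a separator and append the new question.

    The returned prompt ends with "A:" (no trailing space), per tip 2,
    and uses "###" between examples, per tip 1.
    """
    blocks = [f"Q: {q}\nA: {a}" for q, a in pairs]
    blocks.append(f"Q: {query}\nA:")
    return sep.join(blocks)

prompt = join_qa_examples(
    [("How long is the river Nile?", "It's roughly 6650 kilometers long.")],
    "How many people live in New York City?",
)
print(prompt)
```

Keeping prompt assembly in one place like this also makes it easy to spot-check for the spelling mistakes and stray double spaces mentioned in tip 3 before sending anything to the model.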