Few-Shot Prompting

For more complicated tasks, or those that require a specific format, you may have to explicitly show how to properly continue your prompt – by providing one or more examples. This is called few-shot learning (or one-shot learning in the case of just a single example). Let’s have a look at how this plays out in practice.

You’re thinking about where to go for your next vacation, so you’d like a list of possible travel destinations. You may ask the model to assist you in this matter. So far, simply telling the model what to do has proven effective. Let’s first try this zero-shot approach yet again:

I want to travel to a location where I can enjoy sun, sports and landscapes. Where could I go?

Think about how you would answer this query. You may get the gist and list a few possible locations that satisfy the given constraints. On the other hand, you could reply with a lengthy report about that one beautiful place you once visited. You could also just say “I don’t know”. In short, there are dozens of ways to answer this question, none of which are strictly right or wrong. Accordingly, the model could output:

I’ve been to the Caribbean, and I’ve been to the Mediterranean, but I’ve never been to California.

Well, that’s not quite what we were hoping for. Let’s try to improve this prompt. To maximize the chances that the model outputs a list of cool vacation destinations, we should show it examples of how to perform the task – that is, use few-shot learning.

Recommend a travel destination based on the keywords. 
Keywords: Historic City, Scenic Views, Castle 
Travel Destination: Heidelberg, Germany. Heidelberg is one of the most beautiful cities in Germany, with a historic "Altstadt" and a castle. 
Keywords: Lake, Forest, Fishing 
Travel Destination: Lake Tahoe, California. This famous lake in the Sierra Nevada mountains offers an amazing variety of outdoor activities. 
Keywords: Museums, Food, History 
Travel Destination: Rome, Italy. Rome is a city full of museums, historic sites and excellent restaurants. 
Keywords: Beaches, Party, Hiking 
Travel Destination: Mallorca, Spain. This island is famous for its sandy beaches, hilly landscape and vibrant nightlife. 
Keywords: Sun, Sports, Landscapes 
Travel Destination: Phuket, Thailand. This island in the Andaman Sea is known for its pristine beaches and wide variety of outdoor activities.

This is much better, because the model picked up the underlying structure of your task. To write good prompts, use instructions, examples, or both to convey what you want the model to do. See the Jumpstart section for more detailed prompting advice on specific tasks.
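A few-shot prompt like the one above is just concatenated text, so it is easy to assemble programmatically. The sketch below builds the travel-destination prompt from example pairs; the names `build_few_shot_prompt` and the example list are illustrative, not part of any library, and the resulting string can be sent to any text-completion API.

```python
# Assemble a few-shot prompt from (keywords, destination) example pairs.
# Only the prompt text is built here; the completion call itself
# depends on your provider.
examples = [
    ("Historic City, Scenic Views, Castle",
     'Heidelberg, Germany. Heidelberg is one of the most beautiful '
     'cities in Germany, with a historic "Altstadt" and a castle.'),
    ("Lake, Forest, Fishing",
     "Lake Tahoe, California. This famous lake in the Sierra Nevada "
     "mountains offers an amazing variety of outdoor activities."),
]

def build_few_shot_prompt(examples, query_keywords):
    lines = ["Recommend a travel destination based on the keywords."]
    for keywords, destination in examples:
        lines.append(f"Keywords: {keywords}")
        lines.append(f"Travel Destination: {destination}")
    # End with the new keywords and an unfinished line, so the model
    # continues the pattern established by the examples.
    lines.append(f"Keywords: {query_keywords}")
    lines.append("Travel Destination:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "Sun, Sports, Landscapes")
print(prompt)
```

Ending the prompt on the unfinished `Travel Destination:` line is what nudges the model to fill in exactly the slot you care about, rather than continuing with another `Keywords:` line.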


Let’s go over some more tips and tricks for prompt design. In this case, we use a Q&A example to illustrate them. However, they are just as useful for all other kinds of prompting.

  1. Structure your prompt. For example, instead of asking:
    How many people live in New York City? How many people live in the United States?
    use a clear question-and-answer format:
    Q: How many people live in New York City?
    A: There are about 8.5 million people in the city.
    If you are using few-shot learning, clearly separate the examples with a separator such as "###":
    Q: How long is the river Nile?
    A: It's roughly 6650 kilometers long.
    ###
    Q: How many people live in New York City?
    A: About 8.5 million people.
  2. Do not end your prompt with a space character; the model will most likely return nonsense:
    Q: How many people live in New York City?
    A: B:
  3. Check your prompts for mistakes. Spelling mistakes, grammatical errors or even double spaces may adversely affect your results.
    Q: How many people live in New York Cty?
    A: There are about 8.5 million people in the state of New York.
    In this example, the model did not recognize the misspelled word "city", so it answered about the "state of New York" while still giving the number of inhabitants of New York City. The answer is therefore wrong.
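Tips 1 and 2 above can be enforced in code when you assemble Q&A prompts. The helper below is a minimal sketch (the function name and separator default are assumptions, not a library API): it joins few-shot examples with a "###" separator and strips trailing whitespace so the prompt never ends on a space.

```python
def build_qa_prompt(qa_pairs, question, separator="###"):
    """Join Q&A few-shot examples with a separator, append the new
    question, and make sure the prompt does not end in whitespace."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in qa_pairs]
    # The final block ends on "A:" so the model supplies the answer.
    blocks.append(f"Q: {question}\nA:")
    prompt = f"\n{separator}\n".join(blocks)
    return prompt.rstrip()  # tip 2: never end the prompt on a space

pairs = [("How long is the river Nile?",
          "It's roughly 6650 kilometers long.")]
prompt = build_qa_prompt(pairs, "How many people live in New York City?")
print(prompt)
```

Spell-checking the question text itself (tip 3) is still on you, but centralizing the formatting in one place keeps separators and whitespace consistent across all your prompts.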