Implement a simple task

Prerequisites for implementing a task

Make sure you have a Poetry project with Intelligence Layer as a dependency as explained here

Add the necessary dependencies

from dotenv import load_dotenv
from pydantic import BaseModel

from intelligence_layer.core import (
    CompleteInput,
    ControlModel,
    NoOpTracer,
    Pharia1ChatModel,
    Task,
    TaskSpan,
)
from intelligence_layer.examples import SingleChunkQa, SingleChunkQaInput

load_dotenv()

How to define a task

To define a task, it is important to start by understanding the requirements. Once those are clear, it is much easier to define the corresponding input and output in the form of Python classes.

Example

Let's define the following task:

  • I want the LLM to tell a joke about a specific topic
  • It should work for any topic
  • It should fail if there is no topic given by the user

Define input and output

class TellAJokeTaskInput(BaseModel):
    """The topic to joke about. It is a required field, so a request without a topic fails."""

    topic: str

class TellAJokeTaskOutput(BaseModel):
    """The joke produced by the model."""

    joke: str
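Because topic is a required field, pydantic already covers our third requirement: constructing the input without a topic raises a validation error. A quick sketch of that behavior:

```python
from pydantic import BaseModel, ValidationError

class TellAJokeTaskInput(BaseModel):
    topic: str

# A topic is provided, so validation passes.
TellAJokeTaskInput(topic="penguins")

# No topic given: pydantic rejects the input before the task ever runs.
try:
    TellAJokeTaskInput()
except ValidationError:
    print("input rejected: topic is required")
```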

Define the task class

Once the input and output classes are defined, let's create the scaffolding for the task.

class TellAJokeTask(Task[TellAJokeTaskInput, TellAJokeTaskOutput]):
    ...
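The Task base class is generic over its input and output types, which is what the square brackets express. If the pattern is unfamiliar, here is a minimal stdlib-only sketch of the idea; the Task shown below is a simplified stand-in for illustration, not the Intelligence Layer class:

```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

Input = TypeVar("Input")
Output = TypeVar("Output")

class Task(ABC, Generic[Input, Output]):
    """A unit of work that turns a typed input into a typed output."""

    @abstractmethod
    def do_run(self, input: Input) -> Output:
        ...

class Shout(Task[str, str]):
    """A toy task: Task[str, str] pins both input and output to str."""

    def do_run(self, input: str) -> str:
        return input.upper()

print(Shout().do_run("hello"))  # HELLO
```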

How to implement a task

Now that the task class is defined, let's implement the desired logic. To do so, let's start by adding the __init__ method to our task class. It is good practice to accept the model as a parameter so it can be provided via dependency injection. For this example, let's use a ControlModel.

def __init__(self, model: ControlModel | None = None) -> None:
    self._model = model if model else Pharia1ChatModel()

tip

All Aleph-Alpha Chat Models can be used as Control Models.
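The model if model else Pharia1ChatModel() fallback is what makes the dependency injection useful: production code gets a sensible default, while tests can inject a stub. A stdlib-only sketch of the pattern, where DefaultModel and FakeModel are hypothetical stand-ins for real and test models:

```python
class DefaultModel:
    """Stand-in for the model used in production."""

    def complete(self, prompt: str) -> str:
        return "real completion"

class FakeModel:
    """Stand-in a test would inject to avoid network calls."""

    def complete(self, prompt: str) -> str:
        return "canned joke"

class JokeTask:
    def __init__(self, model=None) -> None:
        # Fall back to the default model only when none is injected.
        self._model = model if model else DefaultModel()

    def run(self, topic: str) -> str:
        return self._model.complete(f"Tell me a joke about {topic}")

assert JokeTask(FakeModel()).run("cats") == "canned joke"
assert JokeTask().run("cats") == "real completion"
```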

Now that we have the class initialized, let's add the running part of the task. Each task receives an input and produces an output, along with an execution trace span. To persist the trace, the Intelligence Layer offers Tracers that can be passed to the run method. Let's now see a possible implementation:

def do_run(
    self, input: TellAJokeTaskInput, task_span: TaskSpan
) -> TellAJokeTaskOutput:
    prompt_template = """Tell me a joke about the following topic:"""
    prompt = self._model.to_instruct_prompt(prompt_template, input.topic)
    completion_input = CompleteInput(prompt=prompt)
    completion = self._model.complete(completion_input, task_span)
    return TellAJokeTaskOutput(joke=completion.completions[0].completion)

The input and output of each task need to be of the types defined for the task, in our case TellAJokeTaskInput and TellAJokeTaskOutput. The TaskSpan is used to persist the span generated by the task.

Now, let's look at the important piece of logic: instructing the model to perform the task. We start by giving the basic instruction and then, through self._model.to_instruct_prompt(prompt_template, input.topic), combine the proper model template with the input from the user.

As a last step, we launch the completion against the Aleph Alpha API with completion = self._model.complete(completion_input, task_span). The completion object contains a lot of information but, for now, we are mostly interested in the joke it produced. The joke is available by accessing completion.completions[0].completion.

That's it, the task is defined.
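To see the whole flow without calling the Aleph Alpha API, here is a hedged end-to-end sketch that wires the task to a stub model. StubModel, StubCompletion, and StubCompleteOutput are hypothetical names that only mimic the shapes the task accesses; they are not part of the Intelligence Layer:

```python
from dataclasses import dataclass

@dataclass
class StubCompletion:
    completion: str

@dataclass
class StubCompleteOutput:
    completions: list

class StubModel:
    """Mimics the two model calls the task makes, with canned output."""

    def to_instruct_prompt(self, template: str, input: str) -> str:
        return f"{template}\n{input}"

    def complete(self, completion_input, task_span=None) -> StubCompleteOutput:
        joke = "Why did the cat sit on the keyboard? To keep an eye on the mouse."
        return StubCompleteOutput(completions=[StubCompletion(joke)])

class TellAJokeTask:
    def __init__(self, model=None) -> None:
        self._model = model if model else StubModel()

    def do_run(self, topic: str, task_span=None) -> str:
        # Same three steps as in the real task: build the prompt,
        # run the completion, extract the first completion's text.
        prompt = self._model.to_instruct_prompt(
            "Tell me a joke about the following topic:", topic
        )
        completion = self._model.complete({"prompt": prompt}, task_span)
        return completion.completions[0].completion

joke = TellAJokeTask().do_run("cats")
print(joke)
```

With the real library, you would instead construct the task with a real model and call its run method with a TellAJokeTaskInput and a tracer such as the NoOpTracer imported above.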