Implementing a simple task

Prerequisites

Ensure you have a project with the PhariaInference SDK as a dependency, as explained in Adding PhariaAI SDKs to your project.

Add required dependencies

from dotenv import load_dotenv
from pydantic import BaseModel

from pharia_inference_sdk.core import (
    CompleteInput,
    ControlModel,
    Pharia1ChatModel,
    Task,
    TaskSpan,
)

load_dotenv()

Define a task example: Tell a joke

Before defining a task, first determine its requirements. This makes it easier to define the corresponding input and output as Python classes.

We will define the following task:

  • I want the LLM to tell a joke about a specific topic

  • It must work for any topic

  • It must fail if the user provides no topic

Define input and output

class TellAJokeTaskInput(BaseModel):
    topic: str

class TellAJokeTaskOutput(BaseModel):
    joke: str
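
Because topic is declared as a required field, pydantic already enforces our third requirement: constructing the input without a topic raises a validation error. A quick illustrative check:

from pydantic import ValidationError

TellAJokeTaskInput(topic="penguins")  # valid: a topic is provided

try:
    TellAJokeTaskInput()  # no topic provided
except ValidationError as error:
    print(error)  # pydantic reports the missing required "topic" field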

Define the task class

Once the input and output classes are defined, we create the scaffolding for the task:

class TellAJokeTask(Task[TellAJokeTaskInput, TellAJokeTaskOutput]):
    ...

Implement the task

Now that the task class is defined, we implement the desired logic. We start by adding the __init__ method to our task class. It is good practice to accept the model as a parameter to enable dependency injection. For this example, let’s use a ControlModel:

def __init__(self, model: ControlModel | None = None) -> None:
    self._model = model if model else Pharia1ChatModel()

All Aleph Alpha chat models can be used as control models.
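
Dependency injection makes the model easy to swap, for example in tests. A minimal sketch using only the classes imported above:

task = TellAJokeTask()  # falls back to the default Pharia1ChatModel
task = TellAJokeTask(model=Pharia1ChatModel())  # or inject an explicitly constructed ControlModel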

Now that we have the class initialised, we add the running part of the task. Each task receives an input and records its execution in a trace span. To persist the trace, the PhariaInference SDK offers a Tracer that can be passed to the run method:

def do_run(
    self, input: TellAJokeTaskInput, task_span: TaskSpan
) -> TellAJokeTaskOutput:
    prompt_template = """Tell me a joke about the following topic:"""
    prompt = self._model.to_instruct_prompt(prompt_template, input.topic)
    completion_input = CompleteInput(prompt=prompt)
    completion = self._model.complete(completion_input, task_span)
    return TellAJokeTaskOutput(joke=completion.completions[0].completion)

The input and output of each task must be of the types defined for the task, in our case TellAJokeTaskInput and TellAJokeTaskOutput. The TaskSpan is used to persist the span generated by the task.
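
The span can also carry custom entries. Assuming TaskSpan exposes the same log method as the Intelligence Layer tracer (an assumption; check the SDK reference), extra values can be attached inside do_run:

def do_run(
    self, input: TellAJokeTaskInput, task_span: TaskSpan
) -> TellAJokeTaskOutput:
    # Assumption: TaskSpan provides log() for custom trace entries.
    task_span.log("Joke topic", input.topic)
    ...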

Now we come to the important piece of logic: instructing the model to perform the task. We start with the basic instruction and then, through self._model.to_instruct_prompt(prompt_template, input.topic), combine it with the proper model template and the input from the user.

As a last step, we launch the completion against the Aleph Alpha API with completion = self._model.complete(completion_input, task_span). The completion object contains a lot of information, but we are only interested in the joke it produced, which is available via completion.completions[0].completion.
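
To try the task end to end, instantiate it and call run with an input and a tracer. The snippet below is a sketch; it assumes InMemoryTracer is exported from pharia_inference_sdk.core alongside the classes imported above:

from pharia_inference_sdk.core import InMemoryTracer

task = TellAJokeTask()
tracer = InMemoryTracer()  # keeps the execution trace in memory for inspection
output = task.run(TellAJokeTaskInput(topic="penguins"), tracer)
print(output.joke)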