Execute a task and submit its trace to PhariaStudio

Prerequisites

Make sure you have the task logic implemented in a Jupyter notebook or in a Python file or script, as explained here.
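
For reference, here is a minimal sketch of what such a task could look like, using the class names from the example below. The Task base class and the do_run signature follow the pharia_inference_sdk.core conventions; the Message and generate_chat chat-model interface is an assumption, so check the linked guide for the authoritative implementation.

from pydantic import BaseModel
from pharia_inference_sdk.core import Message, Pharia1ChatModel, Task, TaskSpan

class TellAJokeTaskInput(BaseModel):
    topic: str

class TellAJokeTaskOutput(BaseModel):
    joke: str

class TellAJokeTask(Task[TellAJokeTaskInput, TellAJokeTaskOutput]):
    def __init__(self, model: Pharia1ChatModel) -> None:
        self._model = model

    def do_run(self, input: TellAJokeTaskInput, task_span: TaskSpan) -> TellAJokeTaskOutput:
        # Ask the chat model for a joke about the requested topic. The task
        # span is passed along so the model call appears as a subtask in the
        # trace; generate_chat/Message are assumptions about the SDK's API.
        answer = self._model.generate_chat(
            messages=[Message(role="user", content=f"Tell me a joke about {input.topic}.")],
            response_prefix=None,
            tracer=task_span,
        )
        return TellAJokeTaskOutput(joke=answer)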

Add the necessary dependencies

from pharia_inference_sdk.core import InMemoryTracer, Pharia1ChatModel
from pharia_studio_sdk.connectors import StudioClient

Initialize the Studio Client

Let's initialize the Studio client and create a project.

studio_client = StudioClient(project="Tell me a joke", create_project=True)
Tip

You can also create a project from PhariaStudio and use its name to initialize the client in your code.

Once the project is created, it will be available in the list of projects in the PhariaStudio landing page.

studio-landing-page.png

After selecting the project, navigate to the Debug section in the sidebar to land on the default, empty project screen. On this screen, you can find the code snippet to connect your trace to Studio.

studio-project-view.png

Initialize the model

From the PhariaStudio Playground, let's obtain the name of the model we want to use for our task by selecting it from the dropdown and clicking the copy button.

studio-copy-model-name.png

Back in the code, let's use this name to initialize the model (e.g. pharia-1-llm-7b-control):

model = Pharia1ChatModel("pharia-1-llm-7b-control")

Execute the task

Still in the code, we need to initialize the task with the model we just created and build its input.

task = TellAJokeTask(model)
task_input = TellAJokeTaskInput(topic="Software Engineer")

To visualize the trace in PhariaStudio, we need to initialize the tracer as follows:

tracer = InMemoryTracer()

Finally, let's execute the task:

task.run(task_input, tracer)

Submit the trace to Studio

The last step before switching over to Studio is submitting the trace:

studio_client.submit_from_tracer(tracer)

OTLP Traces Support

Studio (since Pharia version 1.251000.0) also supports OTLP traces: you can instrument your AI applications with any OpenTelemetry tracing SDK (e.g., opentelemetry-python, @opentelemetry/api, go.opentelemetry.io/otel, opentelemetry-java). For now, only the HTTP+Protobuf ingestion endpoint is supported:

<pharia-studio-url>/api/projects/{project_id}/traces_v2
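
As an example, here is a minimal sketch of exporting spans to this endpoint with opentelemetry-python. The placeholder URL and project ID come from the endpoint above; the bearer-token header and the service name are assumptions to adapt to your deployment.

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Point the OTLP HTTP+Protobuf exporter at the Studio ingestion endpoint.
# Replace <pharia-studio-url> and {project_id} with your values; the bearer
# token header is an assumption, use whatever auth your deployment requires.
exporter = OTLPSpanExporter(
    endpoint="<pharia-studio-url>/api/projects/{project_id}/traces_v2",
    headers={"Authorization": "Bearer <your-token>"},
)

provider = TracerProvider(resource=Resource.create({"service.name": "tell-a-joke-app"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Any spans recorded from here on are batched and shipped to Studio.
otel_tracer = trace.get_tracer(__name__)
with otel_tracer.start_as_current_span("tell-a-joke"):
    ...  # your application logic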

Visualize the trace

Once the trace is submitted, the Debug section for the project will look like the following image.

studio-traces.png

The trace takes its name from the task and shows the execution time, the task's input and output, the latency, and whether the execution was successful.

Clicking on the trace opens the detail view, where all the subtasks are shown. In this case, there is just a completion.

studio-trace-details.png

Clicking on a subtask opens the details panel, which gives more information on that subtask's input and output. In this case, you can see the raw prompt submitted to the model for completion.

studio-trace-subtask.png