PhariaAI SDKs

Introduction

Intelligence Layer SDK is deprecated!

For convenience and easier installation, the Intelligence Layer SDK has been split into three separate SDKs:

  • PhariaInference SDK - contains the core module of the Intelligence Layer. It provides a way to interact with the Aleph Alpha inference stack through composable tasks, and allows for tracing the task execution while developing the logic

  • PhariaStudio SDK - contains the evaluation modules and everything needed to connect your project to PhariaStudio

  • PhariaData SDK - provides PhariaDocument Index capabilities and connection to the PhariaData service

This documentation references the new PhariaAI SDKs; however, the same features remain available in the original Intelligence Layer.


Task

In the PhariaInference SDK, a task transforms an input into an output, similar to a function in mathematics:

Task: Input -> Output

In Python, this is realised by an abstract class with type parameters and the abstract method do_run, in which the actual transformation is implemented:

class Task(ABC, Generic[Input, Output]):

    @abstractmethod
    def do_run(self, input: Input, task_span: TaskSpan) -> Output:
        ...

Input and Output are normal Python datatypes that can be serialised to and from JSON. For this, the PhariaInference SDK relies on Pydantic. The supported types are captured in the type alias PydanticSerializable.

The second parameter task_span is used for tracing, as described below.

do_run is the method that implements a concrete task and must be provided by you. It is invoked by the task's public run method:

class Task(ABC, Generic[Input, Output]):
    @final
    def run(self, input: Input, tracer: Tracer) -> Output:
        ...

The signatures of the do_run and run methods differ only in the tracing parameters.
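
For illustration, the following is a minimal sketch of a concrete task. The pydantic models are placeholders, and the import path pharia_inference_sdk.core is an assumption that may differ in your installation:

from pydantic import BaseModel

# Assumed import path; adjust to the module layout of your SDK version.
from pharia_inference_sdk.core import Task, TaskSpan


class GreetInput(BaseModel):
    # Pydantic models keep Input and Output JSON-serialisable.
    name: str


class GreetOutput(BaseModel):
    greeting: str


class GreetTask(Task[GreetInput, GreetOutput]):
    """A trivial task that transforms its input into an output."""

    def do_run(self, input: GreetInput, task_span: TaskSpan) -> GreetOutput:
        # task_span can be used for logging or passed on to subtasks.
        return GreetOutput(greeting=f"Hello, {input.name}!")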

Levels of abstraction

Even though the task concept is generic, the main purpose of a task is of course to use an LLM for a transformation. Tasks are defined at different levels of abstraction. Higher-level tasks (also called use cases) reflect a typical user problem, whereas lower-level tasks interface with an LLM on a generic or technical level.

Typical examples of higher level tasks (use cases) are the following:

  • Answer a question based on a given document:
    QA: (Document, Question) → Answer

  • Generate a summary of a given document:
    Summary: Document → Summary

Examples of lower level tasks are the following:

  • Generate text based on an instruction and some context:
    Instruct: (Context, Instruction) → Completion

  • Chunk a text into smaller pieces at optimised boundaries (typically to make it fit into an LLM’s context size):
    Chunk: Text → [Chunk]

Trace

A task implements a workflow. It processes its input, passes it on to subtasks, processes the outputs of the subtasks, and builds its own output. This workflow can be represented in a trace. For this, a task’s run method accepts a Tracer that stores details about the steps of this workflow, including the tasks that were invoked along with their inputs, outputs, and timing information.

To represent this tracing, we use the following concepts:

  • A Tracer is passed to a task’s run method and provides methods for opening Spans or TaskSpans.

  • A Span is a Tracer that groups multiple logs and runtime durations together as a single, logical step in the workflow.

  • A TaskSpan is a Span that groups multiple logs together with the task’s specific input and output. An opened TaskSpan is passed to Task.do_run. Since a TaskSpan is a Tracer, a do_run implementation can pass this instance on to the run methods of subtasks.

The following diagram illustrates these relationships:

PhariaStudio - tracer
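
Because a TaskSpan is itself a Tracer, a task can hand its TaskSpan to the run methods of its subtasks, so that their executions are recorded as nested steps of its own trace. The following sketch illustrates this pattern; the import path and all class names other than Task and TaskSpan are assumptions:

from pydantic import BaseModel

# Assumed import path; adjust to your SDK version.
from pharia_inference_sdk.core import Task, TaskSpan


class SummaryInput(BaseModel):
    document: str


class SummaryOutput(BaseModel):
    summary: str


class SummarizeTask(Task[SummaryInput, SummaryOutput]):
    """Hypothetical use case that delegates to a lower-level chunking subtask."""

    def __init__(self, chunk_task: Task[str, list[str]]) -> None:
        self._chunk_task = chunk_task

    def do_run(self, input: SummaryInput, task_span: TaskSpan) -> SummaryOutput:
        # Passing the TaskSpan to the subtask's run method records its execution
        # as a nested step of this task's trace.
        chunks = self._chunk_task.run(input.document, task_span)
        # Dummy "summary" for illustration only.
        return SummaryOutput(summary=chunks[0] if chunks else "")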

Each of these concepts is implemented in the form of an abstract base class. The PhariaInference SDK provides several concrete implementations that store the actual traces in different backends. For each backend, each of the three abstract classes Tracer, Span, and TaskSpan needs to be implemented. The top-level Tracer implementations are the following:

  • The NoOpTracer is used when no tracing information is to be stored.

  • The InMemoryTracer stores all traces in an in-memory data structure and is helpful in tests or Jupyter notebooks.

  • The FileTracer stores all traces in a JSON file.

  • The OpenTelemetryTracer uses an OpenTelemetry Tracer to store the traces in an OpenTelemetry backend.
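
Which tracer to pass to run depends on the context. The sketch below contrasts a NoOpTracer with an InMemoryTracer; the import path is an assumption and the task is a placeholder:

from pydantic import BaseModel

# Assumed import path; adjust to your SDK version.
from pharia_inference_sdk.core import InMemoryTracer, NoOpTracer, Task, TaskSpan


class EchoInput(BaseModel):
    text: str


class EchoTask(Task[EchoInput, str]):
    def do_run(self, input: EchoInput, task_span: TaskSpan) -> str:
        return input.text


task = EchoTask()

# Discard all tracing information, e.g. when traces are not needed.
output = task.run(EchoInput(text="hello"), NoOpTracer())

# Keep the full trace in memory, e.g. for inspection in tests or notebooks.
tracer = InMemoryTracer()
output = task.run(EchoInput(text="hello"), tracer)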

Evaluation

An important part of the PhariaStudio SDK is the tooling that helps to evaluate custom tasks. Evaluation helps to measure how well the implementation of a task performs given real-world examples. The outcome of an entire evaluation process is an aggregated evaluation result that consists of metrics aggregated over all examples.

The evaluation process helps to:

  • Optimise a task’s implementation by verifying whether changes improve its performance.

  • Compare the performance of one implementation of a task with that of other (already existing) implementations.

  • Compare the performance of models for a given task implementation.

  • Verify how changes to the environment (such as a new model version or new finetuning version) affect the performance of a task.

Dataset

The basis of an evaluation is a set of examples for the specific task-type to be evaluated. A single Example consists of:

  • An instance of the Input for the specific task.

  • Optionally, an expected output, which can be anything that makes sense in the context of the specific evaluation.

For example, in the case of a classification, the expected output might be the correct classification result; in the case of a Q&A, it could contain a golden answer. However, if an evaluation only compares results with those from other runs, the expected output can be empty.

To enable reproducibility of evaluations, datasets are immutable. A single dataset can be used to evaluate all tasks of the same type, that is, with the same Input and Output types.
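
As a sketch of how such a dataset could be created: the class and method names below mirror the original Intelligence Layer SDK (Example, InMemoryDatasetRepository, create_dataset) and, like the import path, are assumptions to verify against your PhariaStudio SDK version:

from pydantic import BaseModel

# Assumed import path and names, mirroring the original Intelligence Layer SDK.
from pharia_studio_sdk.evaluation import Example, InMemoryDatasetRepository


class ClassifyInput(BaseModel):
    text: str


examples = [
    Example(input=ClassifyInput(text="I love this product"), expected_output="positive"),
    Example(input=ClassifyInput(text="This is terrible"), expected_output="negative"),
]

dataset_repository = InMemoryDatasetRepository()
dataset = dataset_repository.create_dataset(
    examples=examples,
    dataset_name="sentiment-examples",
)
# The created dataset is immutable; its id is later used to reference it in runs.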

Evaluation process

The PhariaStudio SDK supports different kinds of evaluation techniques. The most important ones are:

  • Computing absolute metrics for a task, where the aggregated result can be compared with the results of previous runs in a way that they can be ordered. Text classification is a typical use case for this: the aggregated result could contain metrics such as accuracy, which can easily be compared with other aggregated results.

  • Comparing the individual outputs of different runs (all based on the same dataset) in a single evaluation process and producing a ranking of all runs as an aggregated result. This technique is useful when it is hard to come up with an absolute metric to evaluate a single output, but it is easier to compare two different outputs and decide which one is better. A typical use case might be summarisation.

To support these techniques, the PhariaStudio SDK differentiates between three consecutive steps:

  1. Run a task by feeding it all inputs of a dataset and collecting all outputs.

  2. Evaluate the outputs of one or several runs and produce an evaluation result for each example. Typically, a single run is evaluated if absolute metrics can be computed, and several runs are evaluated when the outputs of runs are to be compared.

  3. Aggregate the evaluation results of one or several evaluation runs into a single object containing the aggregated metrics. Aggregating over several evaluation runs supports amending a previous comparison result with comparisons of new runs without the need to re-execute the previous comparisons.

The following table shows how these three steps are represented in code:

Step          Executor     Custom Logic        Repository
1 Run         Runner       Task                RunRepository
2 Evaluate    Evaluator    EvaluationLogic     EvaluationRepository
3 Aggregate   Aggregator   AggregationLogic    AggregationRepository

The columns indicate the following:

  • "Executor" lists concrete implementations provided by the PhariaStudio SDK.

  • "Custom Logic" lists abstract classes that need to be implemented with the custom logic.

  • "Repository" lists abstract classes for storing intermediate results. The PhariaStudio SDK provides different implementations for these. See the next section for details.

Data storage

During an evaluation process, a lot of intermediate data is created before the final aggregated result can be produced. To avoid repeating expensive computations when new results are produced based on previous ones, all intermediate results are persisted. To do this, the different executor classes use the following repositories:

  • The DatasetRepository offers methods to manage datasets. The Runner uses it to read all Examples of a dataset and feeds them to the Task.

  • The RunRepository stores a task’s output (in the form of an ExampleOutput) for each Example of a dataset; these outputs are created when a Runner runs the task on that dataset. At the end of a run, a RunOverview containing metadata about the run is stored. Given a list of runs to evaluate, the Evaluator reads these outputs to create an evaluation result for each Example of the dataset.

  • The EvaluationRepository enables the Evaluator to store the evaluation result (in the form of an ExampleEvaluation) for each example along with an EvaluationOverview. The Aggregator uses this repository to read the evaluation results.

  • The AggregationRepository stores the AggregationOverview containing the aggregated metrics at the request of the Aggregator.
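
A schematic wiring of the three steps with in-memory repositories could look as follows. The executor constructors, method names, and import path mirror the original Intelligence Layer SDK and are assumptions; task, evaluation_logic, and aggregation_logic stand for the custom pieces sketched earlier:

# Assumed import path and names, mirroring the original Intelligence Layer SDK.
from pharia_studio_sdk.evaluation import (
    Aggregator,
    DatasetRepository,
    Evaluator,
    InMemoryAggregationRepository,
    InMemoryEvaluationRepository,
    InMemoryRunRepository,
    Runner,
)


def run_evaluation(task, evaluation_logic, aggregation_logic,
                   dataset_repository: DatasetRepository, dataset_id: str):
    """Schematic end-to-end flow: run -> evaluate -> aggregate."""
    run_repository = InMemoryRunRepository()
    evaluation_repository = InMemoryEvaluationRepository()
    aggregation_repository = InMemoryAggregationRepository()

    # Step 1: run the task on every Example of the dataset; outputs land in the RunRepository.
    runner = Runner(task, dataset_repository, run_repository, "classify")
    run_overview = runner.run_dataset(dataset_id)

    # Step 2: evaluate the stored outputs example by example.
    evaluator = Evaluator(
        dataset_repository,
        run_repository,
        evaluation_repository,
        "classify-eval",
        evaluation_logic,
    )
    evaluation_overview = evaluator.evaluate_runs(run_overview.id)

    # Step 3: aggregate the per-example evaluations into overall metrics.
    aggregator = Aggregator(
        evaluation_repository,
        aggregation_repository,
        "classify-agg",
        aggregation_logic,
    )
    return aggregator.aggregate_evaluation(evaluation_overview.id)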

The following diagrams illustrate how the different concepts play together in the case of the different types of evaluations:

PhariaStudio absolute evaluation

  1. The Runner reads the Examples of a dataset from the DatasetRepository and runs a Task for each Example.input to produce Outputs.

  2. Each Output is wrapped in an ExampleOutput and stored in the RunRepository.

  3. The Evaluator reads the ExampleOutputs for a given run from the RunRepository and the corresponding Example from the DatasetRepository and uses the EvaluationLogic to compute an Evaluation.

  4. Each Evaluation gets wrapped in an ExampleEvaluation and stored in the EvaluationRepository.

  5. The Aggregator reads all ExampleEvaluations for a given evaluation and feeds them to the AggregationLogic to produce an AggregatedEvaluation.

  6. The AggregatedEvaluation is wrapped in an AggregationOverview and stored in the AggregationRepository.

The following diagram illustrates the more complex case of a relative evaluation:

PhariaStudio relative evaluation

  1. Multiple Runners read the same dataset and produce the corresponding Outputs for different Tasks.

  2. For each run, all Outputs are stored in the RunRepository.

  3. The Evaluator gets as input previous evaluations (that were produced on the basis of the same dataset, but by different Tasks) and the new runs of the current task.

  4. Given the previous evaluations and the new runs the Evaluator can read the ExampleOutputs of both the new runs and the runs associated with previous evaluations, collect all that belong to a single Example and pass them along with the Example to the EvaluationLogic to compute an Evaluation.

  5. Each Evaluation gets wrapped in an ExampleEvaluation and is stored in the EvaluationRepository.

  6. The Aggregator reads all ExampleEvaluations from all involved evaluations and feeds them to the AggregationLogic to produce an AggregatedEvaluation.

  7. The AggregatedEvaluation is wrapped in an AggregationOverview and stored in the AggregationRepository.