Core Concepts

There are a few concepts that arise when talking about the Kernel.

Skill

The first of these primitives is the Skill. A Skill is a user-defined function that follows the request/response pattern: it takes some input and returns some output.

A Skill has a well-defined schema for its input and output. What makes it different from a normal serverless or FaaS (Function as a Service) function is that, because it runs in the context of the Kernel, it has access to the Cognitive System Interface (CSI), described below.

When and how a Skill gets executed is up to the Kernel, which allows the engineer to focus on the business and AI logic of the Skill at hand.
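
To make this concrete, here is a minimal Skill written with the pharia-skill SDK, following the pattern from its quickstart examples. Treat it as a sketch: the model name and prompt are illustrative, not prescribed.

```python
from pharia_skill import ChatParams, Csi, Message, skill
from pydantic import BaseModel


# The input and output schemas are plain Pydantic models.
class Input(BaseModel):
    topic: str


class Output(BaseModel):
    haiku: str


# The @skill decorator marks the entry point; the Kernel passes in the CSI.
@skill
def run(csi: Csi, input: Input) -> Output:
    system = Message.system("You are a poet who strictly speaks in haikus.")
    user = Message.user(input.topic)
    # LLM inference goes through the CSI instead of a hand-rolled HTTP client.
    response = csi.chat("llama-3.1-8b-instruct", [system, user], ChatParams(max_tokens=64))
    return Output(haiku=response.message.content.strip())
```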

WASM Component

On a more technical level, when you build your Skill, it is compiled to a WASM Component. Under the hood, we use componentize-py for this. componentize-py resolves the imports of a Skill module, so any package you import in your Skill will also be included in the Component. However, non-native dependencies (e.g. NumPy, which is written in C) only work if wheels for WASI targets are available at build time. For Pydantic, our SDK resolves this under the hood for you.
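
Building is a single CLI invocation; a sketch, assuming the Skill from above lives in haiku.py:

```shell
# Compile the Skill module into a WASM Component (haiku.wasm).
pharia-skill build haiku
```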

CSI

Similar to how an operating system provides functionality to applications, the Cognitive System Interface (CSI) is the set of functions provided to user code when it runs within the Kernel environment. The functionality of the CSI is focused on the needs of AI methodology, such as LLM inference, vector search, and data access.

By providing a common interface to these tools, the CSI lets user code describe the intended interaction and outcome, while the Kernel takes care of the complexity of providing it. For example, authentication is not part of the CSI interface but is handled by the Kernel, which authenticates all CSI calls with the token provided in the request. To make this interface available at development time, the SDK provides a DevCSI.
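
Beyond inference, the same pattern applies to the other CSI capabilities. As a sketch, a vector search through the CSI might look like this; the IndexPath fields and search signature follow the SDK's examples, and the concrete values are assumptions:

```python
from pharia_skill import Csi, IndexPath, skill
from pydantic import BaseModel


class Input(BaseModel):
    query: str


class Output(BaseModel):
    snippets: list[str]


@skill
def search_docs(csi: Csi, input: Input) -> Output:
    # The index is addressed by namespace, collection, and index name
    # (placeholder values; use the ones from your own setup).
    index = IndexPath(namespace="my-team", collection="docs", index="asym-64")
    results = csi.search(index, input.query, max_results=3)
    return Output(snippets=[result.content for result in results])
```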

Testing

When Skills run in the Kernel, the CSI is provided via an Application Binary Interface, which is defined in the WASM Interface Type (WIT) language. For development and debugging, Skills can also run in a local Python environment: the CSI that is available to the Skill at runtime is substituted with a DevCSI, which is backed by HTTP requests against a running instance of the Kernel. This way, developers can write tests, step through their Python code, and inspect the state of variables.
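
A test for the haiku Skill from above might then look like the following sketch. We assume the DevCSI picks up the Kernel address and authentication token from environment variables, as in the SDK's examples:

```python
from pharia_skill.testing import DevCsi

from haiku import Input, run


def test_haiku_skill():
    # The DevCsi forwards every CSI call as an HTTP request to a running
    # Kernel instance, so the Skill logic runs as plain Python locally.
    csi = DevCsi()
    output = run(csi, Input(topic="oat milk"))
    assert len(output.haiku) > 0
```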

Tracing

The Kernel automatically traces Skills and all interactions with the CSI (logs are currently not available). When developing Skills, the developer does not need to worry about setting up tracing. The Kernel can be configured to export traces to an OpenTelemetry compatible backend. At development time, the DevCSI can be configured to export traces to Pharia Studio, where they can be visualized.
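
At development time, pointing the DevCSI at Studio is a one-liner; a sketch, assuming a constructor helper along the lines of the SDK's examples (the method and project name here are assumptions):

```python
from pharia_skill.testing import DevCsi

# Collect traces of all CSI interactions and export them to a
# Pharia Studio project for visualization.
csi = DevCsi.with_studio("my-studio-project")
```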

Namespaces

The Kernel has the concept of namespaces, which are used to group Skills. Namespaces are configured by the operator of the Kernel. For each namespace, the operator specifies two things:

  1. An OCI registry to load Skills from (Skills are not containers; still, we publish them as OCI images to registries)
  2. A namespace configuration (a toml file, typically checked into a Git repository; see the sketch below)
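
A namespace configuration could look like the following sketch; the exact schema is an assumption, illustrating the idea that a namespace simply lists the Skills it serves:

```toml
# namespace.toml (illustrative field names)
skills = [
  { name = "haiku" },
  { name = "summarize", tag = "v2" },
]
```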

This allows teams to deploy their Skills in self-service once a namespace has been configured by the operator. Permissions for the registry and the namespace configuration can be set up so that only team members can deploy Skills to the namespace. To make a Skill available in the Kernel, two criteria must be met: the Skill must be published as a component to an OCI registry, and it must be listed in the namespace configuration.

You can check out pharia-kernel.namespaces in the values.yaml of the respective deployment. For deployment, configure the pharia-skill CLI with environment variables that point to the correct registry for the namespace you want to deploy to.
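
A deployment might then look like the following sketch; the variable names and values are illustrative assumptions, so check the CLI documentation for the exact interface:

```shell
# Point the CLI at the registry configured for the target namespace
# (illustrative values).
export SKILL_REGISTRY=registry.example.com
export SKILL_REPOSITORY=my-team/skills
export SKILL_REGISTRY_USER=ci-bot
export SKILL_REGISTRY_PASSWORD=...

# Publish the built Skill as an OCI image to that registry.
pharia-skill publish haiku
```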