
Intelligence Layer Release 1.0.0

4 min read

We're happy to announce the public release of our Intelligence Layer SDK.

The Aleph Alpha Intelligence Layer offers a comprehensive suite of development tools for crafting solutions that harness the capabilities of large language models (LLMs). With a unified framework for LLM-based workflows, it facilitates seamless AI product development, from prototyping and prompt experimentation to result evaluation and deployment.

The key features of the Intelligence Layer are:

  • Composability: Streamline your journey from prototyping to scalable deployment. The Intelligence Layer SDK offers seamless integration with diverse evaluation methods, manages concurrency, and orchestrates smaller tasks into complex workflows.
  • Evaluability: Continuously evaluate your AI applications against your quantitative quality requirements. With the Intelligence Layer SDK you can quickly iterate on different solution strategies, ensuring confidence in the performance of your final product. Take inspiration from the provided evaluations for summary and search when building a custom evaluation logic for your own use case.
  • Traceability: At the core of the Intelligence Layer is the belief that all AI processes must be auditable and traceable. We provide full observability by seamlessly logging each step of every workflow. This enhances your debugging capabilities and offers greater control post-deployment when examining model responses.
  • Examples: Get started by following our hands-on examples, demonstrating how to use the Intelligence Layer SDK and interact with its API.
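The composability and traceability ideas can be illustrated with a plain-Python sketch. This is not the Intelligence Layer API, just the pattern it embodies: small tasks with a shared interface, composed into a workflow that logs every intermediate step for later inspection.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Trace:
    """Collects one log entry per task execution for later inspection."""
    entries: list = field(default_factory=list)

    def log(self, task_name: str, data: object) -> None:
        self.entries.append((task_name, data))


def run_pipeline(steps: list[tuple[str, Callable]], value, trace: Trace):
    """Run each named step in order, logging its output to the trace."""
    for name, step in steps:
        value = step(value)
        trace.log(name, value)
    return value


trace = Trace()
steps = [
    ("normalize", str.strip),
    ("lowercase", str.lower),
    ("tokenize", str.split),
]
result = run_pipeline(steps, "  Hello Intelligence Layer  ", trace)
# result == ["hello", "intelligence", "layer"]
# trace.entries holds one (task_name, output) row per step
```

Because every step is logged, a failing workflow can be debugged by replaying the trace instead of re-running the model.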

Artifactory Deployment

You can access and download the SDK via the JFrog Artifactory. To use the SDK in your own project, add it as a dependency to your Poetry setup in the following two steps.

First, add the Artifactory as a source to your project:

poetry source add --priority=explicit artifactory https://alephalpha.jfrog.io/artifactory/api/pypi/python/simple

Second, add the Intelligence Layer to the project:

poetry add --source artifactory intelligence-layer
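After these two commands, the relevant entries in your pyproject.toml should look roughly like this (the exact layout and pinned version depend on your Poetry version):

```toml
[[tool.poetry.source]]
name = "artifactory"
url = "https://alephalpha.jfrog.io/artifactory/api/pypi/python/simple"
priority = "explicit"

[tool.poetry.dependencies]
intelligence-layer = { version = "*", source = "artifactory" }
```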

What's new with version 1.0.0

Llama support

With the Llama2InstructModel and the Llama3InstructModel, we now also support using Llama2 and Llama3 models in the Aleph Alpha Intelligence Layer. These InstructModels support the following model checkpoints:

  • Llama2InstructModel: llama-2-7b-chat, llama-2-13b-chat, llama-2-70b-chat
  • Llama3InstructModel: llama-3-8b-instruct, llama-3-70b-instruct
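As a quick illustration of the mapping above (plain Python, not SDK code), one could validate a requested model name against its wrapper class like this; the function name is hypothetical:

```python
# Supported model names per InstructModel wrapper, as listed above.
SUPPORTED_MODELS = {
    "Llama2InstructModel": ["llama-2-7b-chat", "llama-2-13b-chat", "llama-2-70b-chat"],
    "Llama3InstructModel": ["llama-3-8b-instruct", "llama-3-70b-instruct"],
}


def wrapper_for(model_name: str) -> str:
    """Return the name of the InstructModel wrapper supporting the given model."""
    for wrapper, names in SUPPORTED_MODELS.items():
        if model_name in names:
            return wrapper
    raise ValueError(f"unsupported model: {model_name}")
```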

DocumentIndexClient

The DocumentIndexClient has been enhanced with several new features. You can now create your own index in a namespace and assign it to, or delete it from, individual collections. The DocumentIndex now chunks and embeds all documents in a collection for each index assigned to that collection. The newly added features comprise:

  • create_index
  • index_configuration
  • assign_index_to_collection
  • delete_index_from_collection
  • list_assigned_index_names
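The index lifecycle these methods enable can be sketched with an in-memory stand-in that mirrors the method names listed above. This is a hypothetical mock for illustration only; the real DocumentIndexClient talks to the Document Index service, and the configuration keys shown are made up.

```python
class InMemoryIndexRegistry:
    """Hypothetical in-memory stand-in mirroring the new DocumentIndexClient methods."""

    def __init__(self):
        self._indexes: dict[str, dict] = {}          # index name -> configuration
        self._assignments: dict[str, set] = {}       # collection -> assigned index names

    def create_index(self, index_name: str, configuration: dict) -> None:
        self._indexes[index_name] = configuration

    def index_configuration(self, index_name: str) -> dict:
        return self._indexes[index_name]

    def assign_index_to_collection(self, collection: str, index_name: str) -> None:
        self._assignments.setdefault(collection, set()).add(index_name)

    def delete_index_from_collection(self, collection: str, index_name: str) -> None:
        self._assignments.get(collection, set()).discard(index_name)

    def list_assigned_index_names(self, collection: str) -> list:
        return sorted(self._assignments.get(collection, set()))


registry = InMemoryIndexRegistry()
registry.create_index("semantic", {"chunk_size": 512})
registry.assign_index_to_collection("my-collection", "semantic")
# list_assigned_index_names("my-collection") now returns ["semantic"]
```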

Miscellaneous

Apart from the major changes, we introduced some minor features, such as:

  • The ExpandChunks task now caches chunked documents by ID.
  • DocumentIndexRetriever now supports index_name.
  • Runner.run_dataset now has a configurable number of workers via max_workers, defaulting to the previous value of 10.
  • If a BusyError is raised during a complete call, the LimitedConcurrencyClient will retry until max_retry_time is reached.
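The retry behavior of the last point can be illustrated with a generic, self-contained sketch. All names here are hypothetical stand-ins; the real LimitedConcurrencyClient applies this pattern to complete calls against the API.

```python
import time


class BusyError(Exception):
    """Stands in for the service-is-busy error the client retries on."""


def retry_until(fn, max_retry_time: float, delay: float = 0.01):
    """Call fn, retrying on BusyError until max_retry_time seconds have elapsed."""
    deadline = time.monotonic() + max_retry_time
    while True:
        try:
            return fn()
        except BusyError:
            if time.monotonic() >= deadline:
                raise  # give up once the retry budget is spent
            time.sleep(delay)


attempts = 0


def flaky_complete():
    """Simulates a completion endpoint that is busy for the first two calls."""
    global attempts
    attempts += 1
    if attempts < 3:
        raise BusyError()
    return "completion"


result = retry_until(flaky_complete, max_retry_time=1.0)
# result == "completion" after two BusyError retries
```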

Breaking Changes

The HuggingFaceDatasetRepository now has a caching parameter, which caches the examples of a dataset once they are loaded. It defaults to True and drastically reduces network traffic. To restore the previous, non-caching behavior, set it to False.

The MultipleChunkRetrieverQa no longer takes an insert_chunk_size parameter but instead receives an ExpandChunks task.

The issue_classification_user_journey notebook moved to its own repository.

The Trace Viewer has been moved to its own repository and can be accessed via the JFrog artifact. It has been removed from this repository but is still available in the Docker container.

Fixes

The HuggingFaceRepository is no longer a dataset repository. Consequently, the HuggingFaceAggregationRepository is no longer a dataset repository either.

The input parameter of the DocumentIndex.search() function has been renamed from index to index_name.