Announcing token stream support for the /complete endpoint and the Python client
In version api-scheduler:2024-10-01-00535 of our inference stack, API-scheduler, we added a new stream property to the /complete endpoint that enables streamed token generation.
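Streamed responses like this are typically delivered as server-sent events, where each line carries a JSON chunk with a piece of the completion. The sketch below shows one way a client might consume such a stream; the field name `completion`, the `data: ` framing, and the `[DONE]` sentinel are assumptions for illustration, not the documented wire format.

```python
import json

def parse_sse_line(line: str):
    """Parse one server-sent-events line of the form 'data: {...}'.

    Returns the decoded JSON payload, or None for non-data lines
    (blank keep-alives, comments) and for the end-of-stream marker.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":  # hypothetical end-of-stream sentinel
        return None
    return json.loads(payload)

# Simulated stream chunks as a server might emit them.
raw_lines = [
    'data: {"completion": "Hello"}',
    'data: {"completion": ", world"}',
    "data: [DONE]",
]

tokens = [
    event["completion"]
    for line in raw_lines
    if (event := parse_sse_line(line)) is not None
]
print("".join(tokens))  # → Hello, world
```

The advantage over the non-streamed endpoint is that each chunk can be surfaced to the user as soon as it arrives, instead of waiting for the full completion.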
Back in April, we released an initial version of our Summarization endpoint, which allowed you to summarize text using our language models.
Over the last few weeks we introduced a number of features to improve your experience with our models. We hope they will make it easier for you to test, develop, and productionize solutions built on top of Luminous. In this changelog we want to inform you about the following changes:
We're excited to announce that we have added async support for our Python client! You can now upgrade to v2.5.0, and import AsyncClient to get started making requests to our API in async contexts.
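The main benefit of an async client is issuing several requests concurrently from one event loop. The sketch below illustrates that pattern with `asyncio.gather`; the coroutine `fake_complete` is a stand-in for a real client call, since the client's actual method names and parameters are not shown here.

```python
import asyncio

async def fake_complete(prompt: str) -> str:
    # Stand-in for an API request made through the async client;
    # the sleep simulates network latency.
    await asyncio.sleep(0.01)
    return f"completion for {prompt!r}"

async def main() -> list:
    # gather runs the requests concurrently rather than one after
    # another, which is the point of using an async client.
    prompts = ["first", "second", "third"]
    return await asyncio.gather(*(fake_complete(p) for p in prompts))

results = asyncio.run(main())
print(len(results))  # → 3
```

With the real `AsyncClient`, the same `gather` pattern applies: build one request coroutine per prompt and await them together.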