Introducing tensor parallel inference and CUDA graph caching for adapter-based models
With version api-worker-luminous:2024-07-08-0d839 of our luminous inference workers, we now support tensor parallelism for all supported models and CUDA graph caching for adapter-based models.
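To illustrate the core idea behind tensor parallelism (this is a conceptual sketch only, not the workers' actual implementation or API): a layer's weight matrix is split across devices, each device computes its shard of the matrix multiply in parallel, and the partial results are gathered back into the full output.

```python
import numpy as np

# Conceptual sketch of column-wise tensor parallelism: the weight matrix of a
# linear layer is split column-wise across hypothetical "devices"; each device
# computes one shard of the output, and the shards are concatenated (the
# all-gather step). Illustrative only.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))   # batch of input activations
W = rng.standard_normal((16, 32))  # full weight matrix of the layer

num_devices = 4
shards = np.split(W, num_devices, axis=1)      # each "device" holds a 16x8 shard
partials = [x @ shard for shard in shards]     # computed independently, in parallel
y_parallel = np.concatenate(partials, axis=1)  # gather shards into the full output

# The sharded computation matches the unsharded matmul exactly.
assert np.allclose(y_parallel, x @ W)
```

The same split works row-wise on the weight matrix, in which case the partial outputs are summed (an all-reduce) instead of concatenated.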