How to run thousands of LoRA language models on a single GPU

Image: Llama on GPU (created with Bing Image Creator)

This article is part of our coverage of the latest AI research.

Fine-tuning large language models (LLMs) has become a pivotal tool for companies aiming to tailor AI applications to their specific needs. Despite its potential, the high cost of setting up and deploying fine-tuned LLMs prevents their widespread use.

Parameter-efficient fine-tuning (PEFT) methods help overcome this drawback by significantly reducing overhead, adjusting only a small portion of the model’s parameters. Among these methods, low-rank adaptation (LoRA) stands out for its cost-effectiveness and its unique ability to operate as a detachable adapter, separate from the underlying LLM.

S-LoRA, a new framework developed by researchers from Stanford University and UC Berkeley, extends the capabilities of LoRA. S-LoRA allows the simultaneous operation of hundreds, even thousands, of LoRA adapters on a single GPU. This scalability not only reduces operational costs but also paves the way for wider access to custom LLM applications, making it possible to serve a fine-tuned model per user at negligible cost.

How does LoRA work?

Low-rank adaptation (LoRA)

Traditionally, fine-tuning LLMs for new applications – or “downstream tasks” – involves updating many layers and parameters within a pre-trained model. Given that an LLM typically contains billions of parameters, this method requires significant computational power. LoRA identifies and adjusts a minimal subset of the LLM’s parameters that are specifically relevant to the downstream task. This targeted approach significantly reduces the number of trainable parameters, and thus the memory footprint.
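To make the low-rank idea concrete, here is a minimal, hypothetical PyTorch sketch of a LoRA-augmented linear layer. The class name and the rank and alpha values are illustrative choices, not S-LoRA’s implementation.

```python
# Minimal sketch of a LoRA-augmented linear layer (illustrative, not S-LoRA's code).
# The frozen base weight W stays untouched; only the small low-rank factors A and B train.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)           # freeze the pre-trained weight
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # y = W x + (B A) x * scaling; the update B A has rank at most r, so it adds
        # only (in_features + out_features) * r trainable parameters instead of in * out.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```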

LoRA reduces the number of trainable parameters by several orders of magnitude while keeping accuracy on par with full-parameter fine-tuning. This balance between efficiency and effectiveness has led to widespread adoption within the AI community, with numerous LoRA adapters being developed for various pre-trained LLMs and diffusion models.

The original LoRA paper proposed merging the fine-tuned low-rank weights back into the underlying LLM. However, an alternative and increasingly popular approach is to keep the LoRA weights as independent adapters. These adapters can be dynamically plugged into the underlying model during inference: the outputs of the LoRA adapter and the base model are computed independently and then combined. The additional computational overhead is minimal due to the small size of the LoRA adapter.

This modular design allows multiple LoRA adapters to coexist, each fine-tuned on a different dataset, occupying only a small fraction of the main model’s memory, and representing a distinct, fine-tuned version of the model.

LoRA adapters can be connected to the model at runtime depending on the application.
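As a rough illustration of this plug-in behavior, the hypothetical sketch below keeps several adapters in memory as pairs of small matrices and applies the one named by each request. The adapter names, shapes, and scaling factor are made up for the example.

```python
# Hypothetical sketch: several LoRA adapters sharing one frozen base layer,
# with the adapter chosen per request at inference time.
import torch

hidden, rank = 4096, 8

def lora_delta(x, adapter, scaling=2.0):
    # Compute the adapter's low-rank contribution: (x A^T) B^T * scaling.
    A, B = adapter            # A: (rank, hidden), B: (hidden, rank)
    return (x @ A.T @ B.T) * scaling

adapters = {                  # illustrative per-tenant adapters, initialized like fresh LoRA weights
    "customer_support": (torch.randn(rank, hidden) * 0.01, torch.zeros(hidden, rank)),
    "legal_summaries":  (torch.randn(rank, hidden) * 0.01, torch.zeros(hidden, rank)),
}

def forward_with_adapter(base_layer, x, adapter_name):
    # The base output is shared across tenants; only the cheap low-rank term differs.
    return base_layer(x) + lora_delta(x, adapters[adapter_name])

base_layer = torch.nn.Linear(hidden, hidden, bias=False)   # stands in for one frozen layer of the LLM
x = torch.randn(1, hidden)
y = forward_with_adapter(base_layer, x, "customer_support")
```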

Challenges of scaling LoRA

Running multiple LoRA models alongside a full-parameter LLM presents several technical challenges. Memory management is the main concern: the limited capacity of GPU memory restricts the number of LoRA adapters that can be active alongside the main model. In addition, LLM servers typically use caching and batching techniques to process many requests collectively and increase throughput. However, the variable sizes of LoRA adapters, and the fact that their computations are separate from the underlying model, introduce memory and computational complexity that can slow down inference.

Furthermore, integrating LoRA adapters becomes more complicated with larger LLMs that require parallel processing across multiple GPUs. The additional weights and computations from LoRA adapters complicate the already delicate task of synchronizing operations across multiple GPUs, posing a significant challenge to maintaining operational efficiency.

S-LoRA

S-LoRA architecture

S-LoRA, which stands for “scalable LoRA,” overcomes the hurdles of serving many adapters simultaneously. It offers three key innovations that together make the process practical.

The first is a dynamic memory management system. It keeps all LoRA weights in main memory and dynamically loads and unloads them from the GPU as needed to handle incoming batches of requests for fine-tuned models. This approach ensures that the right weights are on the GPU at the right time, optimizing memory usage and avoiding long delays in responding to requests.
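The sketch below captures only the general idea, not S-LoRA’s actual scheduler: a made-up AdapterCache class keeps every adapter’s weights on the host, copies the ones needed by the current batch to the GPU, and evicts the least recently used adapters when the (assumed) GPU budget is exceeded.

```python
# Hedged sketch of on-demand adapter loading (not S-LoRA's code): all adapter weights
# live in host RAM; only those referenced by the current batch are moved to the GPU,
# with a simple LRU eviction policy standing in for the real scheduler.
from collections import OrderedDict
import torch

class AdapterCache:
    def __init__(self, host_adapters, max_gpu_adapters=32, device="cuda"):
        self.host = host_adapters                 # name -> (A, B) tensors kept on CPU
        self.gpu = OrderedDict()                  # name -> (A, B) tensors on GPU, in LRU order
        self.max_gpu_adapters = max_gpu_adapters  # illustrative budget
        self.device = device                      # requires a CUDA device to actually run

    def fetch(self, name):
        if name in self.gpu:                      # hit: mark as most recently used
            self.gpu.move_to_end(name)
            return self.gpu[name]
        if len(self.gpu) >= self.max_gpu_adapters:
            self.gpu.popitem(last=False)          # evict the least recently used adapter
        A, B = self.host[name]
        self.gpu[name] = (A.to(self.device), B.to(self.device))
        return self.gpu[name]

    def prepare_batch(self, adapter_names):
        # Ensure every adapter referenced by the incoming batch is resident on the GPU.
        return {n: self.fetch(n) for n in set(adapter_names)}
```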

Second, S-LoRA includes a unified paging system. This system manages both the key-value (KV) cache of queries and the adapter weights in a shared memory pool, allowing the server to process hundreds or even thousands of batched queries. It does this without causing memory fragmentation, a common bottleneck that can force the model to recompute large parts of the KV cache.
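As a loose illustration of the paging idea, the sketch below carves a block of memory into fixed-size pages that can hold either KV-cache blocks or adapter weights, so neither fragments the other’s space. The class, page size, and pool capacity are invented for the example and do not mirror S-LoRA’s internals.

```python
# Illustrative sketch of unified paging: one pool of fixed-size pages serves both
# KV-cache blocks and adapter weights. Sizes here are small, made-up numbers.
import torch

class UnifiedPagePool:
    def __init__(self, num_pages=256, page_elems=4096, device="cpu"):
        self.pages = torch.empty(num_pages, page_elems, device=device)  # backing storage
        self.free = list(range(num_pages))                              # indices of free pages
        self.owner = {}                                                 # page -> ("kv"|"adapter", tag)

    def alloc(self, kind, tag, n_pages):
        if len(self.free) < n_pages:
            raise MemoryError("pool exhausted; evict finished sequences or idle adapters")
        got = [self.free.pop() for _ in range(n_pages)]
        for p in got:
            self.owner[p] = (kind, tag)
        return got                                                      # page indices, not raw pointers

    def release(self, pages):
        for p in pages:
            self.owner.pop(p, None)
            self.free.append(p)

pool = UnifiedPagePool()
kv_pages = pool.alloc("kv", "request-17", n_pages=4)        # KV cache for one sequence
lora_pages = pool.alloc("adapter", "customer_support", 2)   # weights for one adapter
pool.release(kv_pages)                                      # pages return to the shared pool
```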

The third innovation is a new tensor parallelism system designed specifically for batched LoRA inference, facilitating the use of multi-GPU setups for large models. This system ensures that the parallel processing of LoRA adapters across GPUs is efficient and effective.
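The following toy sketch runs in a single process with no real GPUs and is not S-LoRA’s parallelization strategy; it only illustrates why batched LoRA fits naturally alongside a column-parallel split of the base weight: each shard adds its own slice of the low-rank update, so the per-device outputs combine exactly as they do for the base weight alone.

```python
# Conceptual toy (single process): a 2-way column-parallel split of the base weight
# with the LoRA up-projection sharded along the same output columns.
import torch

hidden, out, rank, world = 1024, 1024, 8, 2
x = torch.randn(4, hidden)                      # a small batch of activations
W = torch.randn(hidden, out)                    # frozen base weight
A = torch.randn(hidden, rank) * 0.01            # LoRA down-projection (replicated on each "device")
B = torch.randn(rank, out) * 0.01               # LoRA up-projection (sharded by output column)

W_shards = W.chunk(world, dim=1)                # each "device" owns a slice of the output columns
B_shards = B.chunk(world, dim=1)

xa = x @ A                                      # small rank-r intermediate, cheap to compute everywhere
partial_outputs = [x @ W_shards[d] + xa @ B_shards[d] for d in range(world)]
y_parallel = torch.cat(partial_outputs, dim=1)  # gather the column shards

y_reference = x @ W + (x @ A) @ B               # unsharded reference
assert torch.allclose(y_parallel, y_reference, atol=1e-3)
```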

These features allow S-LoRA to serve many LoRA adapters on a single GPU or across multiple GPUs with minimal overhead. In tests, the researchers used S-LoRA to serve different versions of the open-source Llama LLM across various GPU configurations. The results were impressive: S-LoRA handled thousands of LoRA adapters on a single GPU while adding only minor overhead.

Compared to the Hugging Face PEFT library, a parameter-efficient fine-tuning toolkit, S-LoRA boosted throughput by up to 30X. Compared to vLLM, a high-throughput serving system with basic LoRA support, S-LoRA improved throughput by up to 4X and significantly increased the number of adapters served.

S-LoRA can be paired with in-context learning or retrieval-augmented generation (RAG), giving each user a custom adapter while integrating their recent data into the LLM prompt as context.
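A hypothetical request flow might look like the sketch below, where retrieve and generate are placeholder functions rather than S-LoRA’s API: the user’s adapter is selected by name while retrieved documents are injected into the prompt.

```python
# Hypothetical flow combining a per-user LoRA adapter with retrieved context (RAG).
# `retrieve` and `generate` are stand-ins, not part of any real serving API.
def retrieve(user_id, query):
    # Stand-in for a vector-store lookup over the user's recent documents.
    return [f"(most relevant snippets for user {user_id})"]

def generate(prompt, adapter_name):
    # Stand-in for a call to a LoRA-serving backend that selects the named adapter.
    return f"[answer generated with adapter '{adapter_name}']"

def answer(user_id, query):
    context = "\n".join(retrieve(user_id, query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt, adapter_name=f"user-{user_id}")

print(answer("42", "Summarize my last support ticket."))
```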

The S-LoRA code is publicly available on GitHub, and there are plans to integrate it with widely used LLM serving frameworks, which will make it easy for companies to adopt S-LoRA in their applications.

Although S-LoRA represents a major advance, it is not alone in this field. Predibase’s LoRAX is another framework designed to serve LoRA adapters at scale, offering seamless operation of hundreds of LoRA models without the complexities of memory management and model switching.

Running thousands of LoRA-adapted language models on a single GPU once seemed out of reach, but frameworks like S-LoRA and LoRAX show that, with careful memory management and batching, it is achievable. For researchers, developers, and enterprises, this opens the door to serving a personalized, fine-tuned model for every user at a fraction of the previous cost.
