LoRACode introduces a parameter-efficient fine-tuning method that uses Low-Rank Adaptation (LoRA) to improve code embeddings for semantic code search. The approach sharply reduces the number of trainable parameters while improving retrieval quality, achieving notable gains in Mean Reciprocal Rank for both Code2Code and Text2Code search across multiple programming languages. The authors release their code and pre-trained models to support further research in this area.
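To make the idea concrete, here is a minimal sketch of attaching LoRA adapters to a code encoder and using it to embed snippets for retrieval. It uses the Hugging Face `peft` and `transformers` libraries; the base model, target modules, rank, and pooling choice are illustrative assumptions, not the exact LoRACode configuration.

```python
# Sketch: LoRA adapters on a code embedding model (assumed setup, not the paper's code).
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "microsoft/unixcoder-base"                  # assumed code encoder
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModel.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,                                          # low-rank dimension (assumed)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query", "key", "value"],      # attention projections (assumed)
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()                 # only the LoRA adapters are trainable

def embed(texts):
    """Mean-pool the last hidden states to get one embedding per code snippet."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)   # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)    # (B, H)

# Retrieval then reduces to cosine similarity between query and corpus embeddings.
query_emb = embed(["def binary_search(arr, target): ..."])
```

Fine-tuning the adapters on retrieval pairs (e.g., with a contrastive loss) touches only the small LoRA matrices, which is what keeps the trainable parameter count low.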
Fed-SB introduces an approach for federated fine-tuning of large language models with Low-Rank Adaptation (LoRA) that addresses the high communication costs and performance degradation of existing federated LoRA methods. By training and aggregating only a small square matrix that parameterizes each update, Fed-SB cuts per-round communication substantially while improving performance across a range of reasoning tasks, yielding a better trade-off between communication efficiency and accuracy in both private and non-private settings.
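The sketch below illustrates the core mechanism as I understand it: the low-rank factors B and A are fixed and shared by all clients, each client trains only a small r x r matrix R, and the server averages the clients' R matrices. Because the weight update B R A is linear in R, averaging R is equivalent to averaging the full updates, while each client uploads only r*r numbers per round. The shapes, learning rate, and toy "local training" step are illustrative assumptions, not the authors' implementation.

```python
# Sketch of federated aggregation over a small square LoRA matrix (assumed setup).
import numpy as np

d_out, d_in, r, num_clients = 64, 64, 4, 8
rng = np.random.default_rng(0)

B = rng.standard_normal((d_out, r)) / np.sqrt(r)    # frozen, shared by all clients
A = rng.standard_normal((r, d_in)) / np.sqrt(d_in)  # frozen, shared by all clients

def local_update(R, steps=10, lr=0.02):
    """Stand-in for a client's local training: gradient steps on a toy quadratic loss."""
    target = rng.standard_normal((d_out, d_in))     # pretend local data defines a target update
    for _ in range(steps):
        residual = B @ R @ A - target
        grad_R = B.T @ residual @ A.T               # gradient of 0.5*||B R A - target||^2 w.r.t. R
        R = R - lr * grad_R
    return R

# One federated round: each client sends back only its r x r matrix.
global_R = np.zeros((r, r))
client_Rs = [local_update(global_R.copy()) for _ in range(num_clients)]
global_R = np.mean(client_Rs, axis=0)               # exact aggregation: B @ mean(R) @ A == mean(B @ R_i @ A)

# Per-round upload per client: r*r floats instead of r*(d_out + d_in) for standard LoRA.
print(f"communicated {r * r} params/client vs {r * (d_out + d_in)} for vanilla LoRA")
```

The linearity of the update in R is what makes simple averaging exact here, in contrast to standard federated LoRA where averaging A and B separately does not average the product.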