LoRACode introduces a parameter-efficient fine-tuning method that uses Low-Rank Adaptation (LoRA) to improve code embeddings for semantic code search. The approach sharply reduces the number of trainable parameters while improving retrieval quality, yielding notable gains in Mean Reciprocal Rank (MRR) for both Code2Code and Text2Code search across multiple programming languages. The authors release their code and pre-trained models to support further research in this domain.
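To make the idea concrete, the sketch below shows how LoRA adapters are typically attached to a frozen code encoder using the Hugging Face `peft` library. The base model name, target modules, and hyperparameters are illustrative assumptions, not the exact LoRACode configuration.

```python
# Minimal sketch: parameter-efficient fine-tuning of a code embedding model
# with LoRA. Assumes `transformers` and `peft` are installed; the model name
# and LoRA hyperparameters are placeholders, not the paper's settings.
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "microsoft/codebert-base"  # assumed base encoder
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModel.from_pretrained(base_model_name)

# Low-rank adapters on the attention projections; the frozen base weights
# stay untouched, so only a small fraction of parameters is trainable.
lora_config = LoraConfig(
    r=8,                                 # rank of the low-rank update matrices
    lora_alpha=16,                       # scaling factor for the update
    target_modules=["query", "value"],   # attention projections in BERT-style encoders
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base model


def embed(code: str) -> torch.Tensor:
    """Mean-pool the last hidden states to obtain a single code embedding."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
```

Only the small adapter matrices are updated during fine-tuning, which is what keeps the approach parameter-efficient; the resulting embeddings can then be ranked by cosine similarity for Code2Code or Text2Code retrieval.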
Tags: code-embeddings, machine-learning, low-rank-adaptation, semantic-search, fine-tuning