Text-to-LoRA (T2L) is a hypernetwork that enables instant adaptation of large language models to specific tasks from nothing more than a natural language task description, eliminating the need for extensive fine-tuning and dataset curation. Trained on a collection of pre-existing LoRA adapters, T2L generates a task-specific adapter in a single forward pass, matching the performance of conventionally trained adapters while sharply reducing computational cost and generalizing zero-shot to unseen tasks. A minimal sketch of the idea follows the tags below.
Tags: machine-learning, transformers, hypernetwork, model-adaptation, foundation-models
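The sketch below is a hypothetical illustration of the core mechanism, not the authors' implementation: a small hypernetwork maps a task-description embedding to the low-rank matrices (A, B) of a LoRA adapter for one target linear layer, in a single forward pass. All names, dimensions, and the placeholder task embedding are assumptions for illustration.

```python
# Hypothetical sketch of the Text-to-LoRA idea (PyTorch assumed installed).
import torch
import torch.nn as nn


class LoRAHyperNetwork(nn.Module):
    """Maps a task embedding to a low-rank adapter (A, B) for one layer."""

    def __init__(self, task_emb_dim: int, hidden_dim: int,
                 target_in: int, target_out: int, rank: int = 8):
        super().__init__()
        self.rank = rank
        self.target_in = target_in
        self.target_out = target_out
        self.trunk = nn.Sequential(
            nn.Linear(task_emb_dim, hidden_dim),
            nn.ReLU(),
        )
        # Separate heads emit the flattened A (rank x in) and B (out x rank).
        self.head_a = nn.Linear(hidden_dim, rank * target_in)
        self.head_b = nn.Linear(hidden_dim, target_out * rank)

    def forward(self, task_emb: torch.Tensor):
        h = self.trunk(task_emb)
        A = self.head_a(h).view(self.rank, self.target_in)
        B = self.head_b(h).view(self.target_out, self.rank)
        return A, B


if __name__ == "__main__":
    # Placeholder task embedding; in practice this would come from a text
    # encoder applied to the natural-language task description.
    task_emb = torch.randn(768)
    hyper = LoRAHyperNetwork(task_emb_dim=768, hidden_dim=256,
                             target_in=1024, target_out=1024, rank=8)
    A, B = hyper(task_emb)   # one forward pass produces the adapter weights
    delta_W = B @ A          # low-rank update applied to the frozen weight W
    print(A.shape, B.shape, delta_W.shape)
```

In a full system there would be one such output head (or a shared, layer-conditioned head) per adapted layer of the base model, and the hypernetwork would be trained against a library of existing LoRA adapters or directly on downstream task losses.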