3 links tagged with all of: privacy + machine-learning + federated-learning
Links
Privacy-preserving synthetic data can improve both small language models and large language models (LLMs) in mobile applications like Gboard, enhancing the typing experience while minimizing privacy risk. Using federated learning and differential privacy, Google researchers have developed methods that synthesize data mimicking user interactions without accessing sensitive information, yielding significant accuracy improvements and more efficient model training. Ongoing work aims to refine these techniques further and integrate them into mobile environments.
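As a rough illustration of the privacy mechanism involved, here is a minimal sketch of differentially private federated averaging (DP-FedAvg-style clipping and noising); the function name, parameter values, and noise calibration below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Average client model updates with per-client clipping and Gaussian noise.

    Clipping bounds any single client's influence on the aggregate; the added
    noise makes the result differentially private with respect to any one client.
    """
    rng = rng if rng is not None else np.random.default_rng()
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Scale the update down if its L2 norm exceeds the clipping bound.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Noise stddev calibrated to the clip bound and cohort size.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Usage: aggregate three simulated 4-dimensional client updates.
updates = [np.random.randn(4) for _ in range(3)]
noisy_mean = dp_federated_average(updates)
```

Clipping bounds what any single user can contribute, and the Gaussian noise is what lets models or synthetic data derived from the aggregate carry a differential privacy guarantee.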
Fed-SB introduces a novel approach to federated fine-tuning of large language models using Low-Rank Adaptation (LoRA), addressing the high communication costs and performance degradation of traditional methods. By training and communicating only a small square matrix rather than the full low-rank factors, Fed-SB sharply reduces communication costs while improving performance across various reasoning tasks, striking a new balance between efficiency and effectiveness in both private and non-private settings.
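A minimal sketch of the adapter pattern the summary describes, assuming a LoRA-SB-style parameterization W = W0 + B R A with frozen factors A and B and a trainable r x r matrix R; the class and function names, shapes, and initialization here are illustrative, not Fed-SB's actual implementation.

```python
import torch
import torch.nn as nn

class FedSBLinear(nn.Module):
    """Linear layer with frozen low-rank factors A, B and a small trainable
    r x r matrix R. Only R is trained and communicated in federated rounds.
    """
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay fixed
        in_f, out_f = base.in_features, base.out_features
        # Frozen projection factors, kept fixed after initialization.
        self.A = nn.Parameter(torch.randn(rank, in_f) / in_f**0.5, requires_grad=False)
        self.B = nn.Parameter(torch.randn(out_f, rank) / rank**0.5, requires_grad=False)
        # The only trainable (and communicated) parameter: a small square matrix.
        self.R = nn.Parameter(torch.zeros(rank, rank))

    def forward(self, x):
        # Output is base(x) plus the low-rank correction B R A x.
        return self.base(x) + x @ self.A.T @ self.R.T @ self.B.T

def aggregate_R(client_Rs):
    # Because the update B R A is linear in R, averaging the clients' R
    # matrices exactly averages their adapter updates.
    return torch.stack(client_Rs).mean(dim=0)
```

Since A and B are shared and frozen, server-side averaging of R is an exact aggregate of the clients' updates, which is part of what keeps communication cheap and aggregation well-behaved in the private setting.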
FUSED is a proposed method for federated unlearning that addresses challenges such as indiscriminate knowledge removal and the irreversibility of unlearning. It utilizes selective sparse adapters to overwrite sensitive knowledge without altering original model parameters, making unlearning both reversible and cost-effective. Experimental results indicate that FUSED outperforms existing methods while significantly reducing unlearning costs.
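A minimal sketch of the reversible, adapter-based unlearning idea, assuming a fixed sparse mask over a frozen linear layer; the random mask selection, class name, and sparsity level are illustrative placeholders rather than FUSED's actual procedure.

```python
import torch
import torch.nn as nn

class SparseUnlearningAdapter(nn.Module):
    """Base weights stay frozen; a sparse trainable delta overwrites targeted
    knowledge at a small set of weight positions. Removing the adapter exactly
    restores the original model, so unlearning is reversible.
    """
    def __init__(self, base: nn.Linear, sparsity: float = 0.05):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        # Fixed binary mask choosing which positions the adapter may touch
        # (random here; a real method would select them based on the data to forget).
        self.register_buffer("mask", (torch.rand_like(base.weight) < sparsity).float())
        self.delta = nn.Parameter(torch.zeros_like(base.weight))

    def forward(self, x):
        # Apply the base weights plus the masked (sparse) correction.
        w = self.base.weight + self.delta * self.mask
        return nn.functional.linear(x, w, self.base.bias)

    def revert(self):
        # Reversibility: zeroing the adapter restores the original model exactly.
        with torch.no_grad():
            self.delta.zero_()
```

Training only `delta` on an unlearning objective is far cheaper than retraining or editing the full model, and because the original parameters are never modified, the operation can be undone at any time.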