Privacy-preserving synthetic data can enhance the performance of both small and large language models (LLMs) in mobile applications such as Gboard, improving the typing experience while minimizing privacy risk. Using federated learning and differential privacy, Google researchers have developed methods to synthesize data that mimics user interactions without accessing sensitive information, yielding notable accuracy improvements and more efficient model training. Ongoing work aims to refine these techniques further and integrate them more deeply into mobile environments.
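
The summary above names federated learning combined with differential privacy as the core mechanism. As a rough illustration of that general recipe, and not Google's actual Gboard training pipeline, the sketch below clips each client's model update to a fixed L2 norm and adds calibrated Gaussian noise before averaging. All names, array shapes, and the `CLIP_NORM` / `NOISE_MULTIPLIER` values are hypothetical.

```python
import numpy as np

# Minimal sketch of differentially private federated averaging:
# clip per-client updates, add Gaussian noise, then aggregate.
# Values below are illustrative, not tuned privacy parameters.

CLIP_NORM = 1.0         # L2 bound applied to each client's update
NOISE_MULTIPLIER = 1.1  # noise stddev = NOISE_MULTIPLIER * CLIP_NORM


def clip_update(update: np.ndarray, clip_norm: float = CLIP_NORM) -> np.ndarray:
    """Scale a client's model delta so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))


def dp_federated_average(client_updates, clip_norm=CLIP_NORM,
                         noise_multiplier=NOISE_MULTIPLIER, rng=None):
    """Aggregate clipped client updates with Gaussian noise added to the sum."""
    rng = rng or np.random.default_rng()
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(client_updates)


if __name__ == "__main__":
    # Simulated per-client model deltas (e.g., gradients of a small LM layer).
    updates = [np.random.randn(16) * 0.1 for _ in range(100)]
    new_delta = dp_federated_average(updates)
    print("Aggregated update norm:", np.linalg.norm(new_delta))
```

Clipping bounds each user's influence on the aggregate, and the Gaussian noise scaled to that bound is what makes the released update differentially private; the same pattern underlies DP-FedAvg-style training regardless of the model being updated.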
Tags: privacy, synthetic-data, federated-learning, machine-learning, language-models