3 links tagged with all of: language-models + privacy
Links
Apple has unveiled updates to its on-device and server foundation language models, enhancing generative AI capabilities while prioritizing user privacy. The new models, optimized for Apple silicon, support multiple languages and run more efficiently, incorporating advanced architectures and diverse training data, including image-text pairs, to power intelligent features across Apple's platforms.
Privacy-preserving synthetic data can improve the performance of both small and large language models (LLMs) in mobile applications like Gboard, enhancing the typing experience while minimizing privacy risk. Using federated learning and differential privacy, Google researchers have developed methods to synthesize data that mimics user interactions without accessing sensitive information, yielding significant accuracy improvements and more efficient model training. Ongoing work aims to refine these techniques further and integrate them more deeply into mobile environments.
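The post itself doesn't ship code, but the privacy mechanism it names can be sketched in a few lines: clip each client's update to a fixed L2 norm, then add calibrated Gaussian noise to the aggregate, the core of differentially private federated averaging. Everything below (function names, the `clip_norm` and `noise_multiplier` values) is an illustrative assumption, not Google's production pipeline:

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, seed=0):
    """Aggregate per-client model updates with differential privacy.

    Each client's update vector is clipped to a maximum L2 norm, then
    Gaussian noise scaled to the clipping bound is added to the sum so
    no single client's contribution can be recovered from the aggregate.
    Parameters here are illustrative, not tuned values.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # clip, never amplify
        clipped.append(update * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Toy example: three simulated clients contribute gradient-like updates.
clients = [np.array([0.5, -1.2, 0.3]),
           np.array([2.0, 0.1, -0.4]),
           np.array([-0.3, 0.8, 1.5])]
print(dp_federated_average(clients))
```

The server only ever sees the noised aggregate, which is what lets synthetic data be distilled from user behavior without inspecting any individual's inputs.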
FlexOlmo introduces a new paradigm for language model training that lets data owners collaborate without relinquishing control over their data. The approach supports asynchronous contributions, preserves data privacy, and allows flexible data use, in contrast to the all-or-nothing data sharing of conventional AI development. By leveraging a mixture-of-experts architecture, FlexOlmo improves model performance while minimizing the risk of data extraction.
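As a rough illustration of the mixture-of-experts idea behind this, here is a toy sketch in which each data owner contributes an independently trained expert that can later be withdrawn without retraining the others; all class, function, and parameter names are hypothetical, and this is not the FlexOlmo codebase:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MixtureOfExperts:
    """Minimal mixture-of-experts: experts join or leave independently,
    mirroring the flexible-data-use idea. A toy sketch, not FlexOlmo."""

    def __init__(self):
        self.experts = {}  # owner name -> expert weight matrix
        self.gates = {}    # owner name -> gating vector

    def add_expert(self, name, weights, gate):
        self.experts[name] = weights
        self.gates[name] = gate

    def remove_expert(self, name):
        # A data owner can withdraw its expert without retraining the rest.
        self.experts.pop(name, None)
        self.gates.pop(name, None)

    def forward(self, x):
        names = list(self.experts)
        scores = np.array([self.gates[n] @ x for n in names])
        weights = softmax(scores)                           # routing weights
        outputs = np.stack([self.experts[n] @ x for n in names])
        return weights @ outputs                            # gated combination

rng = np.random.default_rng(0)
moe = MixtureOfExperts()
for owner in ["public", "owner_a", "owner_b"]:
    moe.add_expert(owner, rng.normal(size=(4, 8)), rng.normal(size=8))
x = rng.normal(size=8)
print(moe.forward(x))
moe.remove_expert("owner_b")  # opt out: the expert leaves the mixture
print(moe.forward(x))
```

Because each expert's parameters are trained on its owner's data alone, removing an expert removes that data's influence from the mixture, which is what makes the opt-out meaningful.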