6 links tagged with all of: machine-learning + privacy
Links
The EdgeAI for Beginners course offers a comprehensive introduction to deploying artificial intelligence on edge devices, emphasizing practical applications, privacy, and real-time performance. It covers small language models, optimization techniques, and production strategies, with hands-on workshops and resources for various technical roles across multiple industries. Participants can follow a structured learning path and engage with a community of developers for support.
Anonymization is crucial for transforming sensitive data into useful resources for machine learning, allowing models to generalize without memorizing specific data points. Recent privacy-enhancing technologies, including frameworks like Private Evolution and PAC Privacy, focus on generating effective synthetic datasets and on minimizing the risk of data reconstruction. These innovations shift the emphasis from mere regulatory compliance to responsible data usage while preserving model performance.
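Neither framework's code is reproduced here; as a minimal, self-contained illustration of the kind of formal guarantee such privacy-enhancing technologies build on, the classic Laplace mechanism from differential privacy adds calibrated noise to a released statistic (the data and parameters below are made up):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a statistic with epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon); a smaller
    epsilon means more noise and a stronger privacy guarantee.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Made-up example: privately release the mean of values bounded in [0, 100].
values = np.array([42.0, 58.0, 73.0, 31.0])
# Changing one record moves the mean by at most bound / n.
sensitivity = 100.0 / len(values)
print(laplace_mechanism(values.mean(), sensitivity, epsilon=1.0))
```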
Apple has unveiled updates to its on-device and server foundation language models, enhancing generative AI capabilities while prioritizing user privacy. The new models, optimized for Apple silicon, support multiple languages and improved efficiency, incorporating advanced architectures and diverse training data, including image-text pairs, to power intelligent features across its platforms.
Privacy-preserving synthetic data can enhance the performance of both small and large language models (LLMs) in mobile applications like Gboard, improving user typing experiences while minimizing privacy risks. By utilizing federated learning and differential privacy, Google researchers have developed methods to synthesize data that mimics user interactions without accessing sensitive information, resulting in significant accuracy improvements and efficient model training. Ongoing advancements aim to further refine these techniques and integrate them into mobile environments.
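Google's actual pipeline is not shown here; as a rough sketch of one building block behind such systems (function and parameter names are illustrative), differentially private federated aggregation clips each client's update and adds calibrated Gaussian noise before averaging:

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate per-client model updates with clipping and Gaussian noise.

    Clipping bounds any single user's influence on the average; the noise,
    scaled to the clip norm, is what yields the differential-privacy guarantee.
    """
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in client_updates]
    total = np.sum(clipped, axis=0)
    total += rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total / len(client_updates)

# Toy example with three clients' gradient-like update vectors.
updates = [np.array([0.5, -1.2]), np.array([2.0, 0.3]), np.array([-0.7, 0.9])]
print(dp_federated_average(updates, clip_norm=1.0, noise_multiplier=0.5))
```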
Fed-SB introduces a novel approach to federated fine-tuning of large language models using Low-Rank Adaptation (LoRA), addressing the high communication costs and performance degradation of traditional methods. By training only a small square matrix between frozen low-rank factors, Fed-SB significantly reduces communication costs while improving performance across various reasoning tasks, striking a new balance between efficiency and effectiveness in both private and non-private settings.
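The paper's implementation is not reproduced here, but a minimal PyTorch sketch of the idea (class and variable names are my own) shows why communication is cheap: only the small r × r matrix R is trainable, so each round a client exchanges r² numbers per layer rather than full low-rank factors, and averaging R aggregates the update exactly.

```python
import torch
import torch.nn as nn

class LoRASBLinear(nn.Module):
    """Sketch of a LoRA variant where only a small r x r matrix is trainable.

    Effective weight: W + B @ R @ A, with A (r x in) and B (out x r) frozen
    after initialization. Only R (r x r) is updated locally and sent to the
    server for averaging.
    """
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(r, in_f) / in_f**0.5, requires_grad=False)
        self.B = nn.Parameter(torch.randn(out_f, r) / r**0.5, requires_grad=False)
        self.R = nn.Parameter(torch.zeros(r, r))  # the only trainable part

    def forward(self, x):
        # x @ A^T @ R^T @ B^T computes (B R A) x for a batch of inputs.
        return self.base(x) + x @ self.A.T @ self.R.T @ self.B.T

layer = LoRASBLinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))  # only layer.R receives gradients
```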
FUSED is a proposed method for federated unlearning that addresses challenges such as indiscriminate knowledge removal and the irreversibility of unlearning. It utilizes selective sparse adapters to overwrite sensitive knowledge without altering original model parameters, making unlearning both reversible and cost-effective. Experimental results indicate that FUSED outperforms existing methods while significantly reducing unlearning costs.
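As a conceptual sketch only (not the FUSED implementation; class and parameter names are invented), a sparse adapter can overwrite a frozen layer's behavior through a masked delta, and the unlearning is reversed by simply disabling the adapter:

```python
import torch
import torch.nn as nn

class SparseUnlearningAdapter(nn.Module):
    """Conceptual sparse adapter for reversible unlearning.

    The frozen base layer is never modified; a sparse, trainable delta
    (restricted to a fixed random support by a binary mask) overwrites the
    unwanted behavior. Disabling the adapter restores the original model
    exactly, which is what makes the unlearning reversible.
    """
    def __init__(self, base: nn.Linear, sparsity: float = 0.95):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        mask = (torch.rand(base.out_features, base.in_features) > sparsity).float()
        self.register_buffer("mask", mask)
        self.delta = nn.Parameter(torch.zeros_like(mask))
        self.enabled = True  # set False to revert to the original model

    def forward(self, x):
        if not self.enabled:
            return self.base(x)
        return self.base(x) + x @ (self.delta * self.mask).T

adapter = SparseUnlearningAdapter(nn.Linear(128, 128))
y = adapter(torch.randn(2, 128))
```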