VaultGemma is a 1B-parameter language model from Google Research that incorporates differential privacy from the ground up, addressing the inherent trade-offs among privacy, compute, and utility. The model is designed to minimize memorization of its training data while delivering robust performance, and its training configuration was guided by newly established scaling laws for differentially private language models. The weights are openly released, with the aim of fostering the development of safe and private AI technologies.
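To make the idea of training with differential privacy concrete, the sketch below shows a generic DP-SGD update step, the standard mechanism for differentially private training of neural networks: each example's gradient is clipped to a fixed L2 bound, the clipped gradients are summed, and calibrated Gaussian noise is added before the parameter update. This is an illustrative toy example, not VaultGemma's actual training code; the tiny model, the `dp_sgd_step` helper, and the specific clipping and noise values are assumptions chosen for readability.

```python
import torch
import torch.nn as nn

# Toy stand-in for a trainable model; a real LLM would be far larger.
model = nn.Linear(16, 4)
loss_fn = nn.CrossEntropyLoss()

clip_norm = 1.0         # per-example gradient clipping bound C (illustrative value)
noise_multiplier = 1.1  # sigma; noise std is sigma * C (illustrative value)
lr = 0.1


def dp_sgd_step(batch_x, batch_y):
    """One DP-SGD step: clip each example's gradient, sum, add Gaussian noise."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        # Rescale so this example's total gradient norm is at most clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for acc, g in zip(summed, grads):
            acc += g * scale
    # Add Gaussian noise calibrated to the clipping bound, then apply the update.
    with torch.no_grad():
        for p, acc in zip(model.parameters(), summed):
            noise = torch.randn_like(acc) * noise_multiplier * clip_norm
            p -= lr * (acc + noise) / len(batch_x)


# Example usage on random data.
x = torch.randn(8, 16)
y = torch.randint(0, 4, (8,))
dp_sgd_step(x, y)
```

The clipping bound limits any single example's influence on the update, and the noise scale is tied to that bound, which is what yields a formal privacy guarantee; production systems track the cumulative privacy budget with a dedicated accountant rather than the bare loop shown here.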