ShieldGemma 2 is a 4-billion-parameter image safety classification model from Google that evaluates images against specific safety policies covering sexually explicit, dangerous, and violent content. The model is distributed through Hugging Face and is intended as a content-moderation component in AI applications, for example to filter the inputs and outputs of image-generation systems. Users must accept Google's usage license before they can download the model's weights.
Tags: image-safety, moderation, google, gemma, ai
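As a rough illustration of how such a model might be called from Hugging Face `transformers`, the sketch below wraps loading and inference in one helper. The model id `google/shieldgemma-2-4b-it`, the `ShieldGemma2ForImageClassification` class, and the `output.probabilities` field follow the public model card, but exact names can vary by `transformers` version, so treat this as an assumption-laden sketch rather than a definitive recipe; downloading the weights also requires having accepted Google's license on Hugging Face.

```python
# Hedged sketch: classifying an image against ShieldGemma 2's safety policies.
# Assumes transformers >= 4.50 with ShieldGemma2ForImageClassification and a
# Hugging Face account that has accepted Google's license for the model.

# The three policies ShieldGemma 2 is described as checking images against.
POLICIES = ["Sexually Explicit", "Dangerous Content", "Violence/Gore"]


def classify_image(image_path: str, model_id: str = "google/shieldgemma-2-4b-it"):
    """Return per-policy safety scores for one image (network + license required)."""
    # Imports are kept inside the function so the module loads without the
    # heavyweight dependencies installed.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, ShieldGemma2ForImageClassification

    processor = AutoProcessor.from_pretrained(model_id)
    model = ShieldGemma2ForImageClassification.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    image = Image.open(image_path)
    inputs = processor(images=[image], return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model(**inputs)

    # output.probabilities holds, per policy, the scores for a policy
    # violation ("yes") versus no violation ("no").
    return output.probabilities
```

A caller would typically threshold these scores per policy and block or flag images whose violation probability exceeds the threshold.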