Saved February 14, 2026
Do you care about this?
Google has updated its Gemini app to let users verify whether a video was created by its AI. By uploading a video, users can check for a digital watermark that indicates AI involvement. However, the tool only works for content generated by Google's own systems.
If you do, here's more
Google has expanded its Gemini app to include a tool for verifying AI-generated videos. Users can upload a video and ask whether it was created using Google's AI technology. The app checks for SynthID, a proprietary digital watermarking system that embeds signals into AI-generated content. These signals are invisible to the human eye but detectable by Gemini's software. The tool scans the entire video, assessing both visuals and audio to identify AI elements.
The response is more detailed than a simple yes or no. For instance, it can indicate specific segments where AI content is present, such as noting that audio generated by AI was detected in a certain timeframe while visuals were not. This level of detail can help users discern the authenticity of media. However, there are limitations: uploaded files must be under 100 MB and cannot exceed 90 seconds in length, which restricts the tool's use to short clips rather than full-length movies.
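As a minimal sketch of how an app or script might pre-check a clip against the stated upload limits before sending it to Gemini, the snippet below validates a file's size and duration. The function name and constants are illustrative assumptions, not part of any real Gemini API; the only grounded details are the limits themselves (under 100 MB, at most 90 seconds).

```python
# Hypothetical pre-check against the upload limits described for
# Gemini's video verification: under 100 MB and no longer than 90 s.
# Names here are illustrative, not a real Gemini API.

MAX_SIZE_BYTES = 100 * 1024 * 1024   # stated limit: under 100 MB
MAX_DURATION_SECONDS = 90            # stated limit: 90 seconds max


def within_upload_limits(size_bytes: int, duration_seconds: float) -> bool:
    """Return True if a clip fits both stated limits."""
    return size_bytes < MAX_SIZE_BYTES and duration_seconds <= MAX_DURATION_SECONDS


# Example: a 45 MB, 60-second clip passes; a 2-minute clip does not.
print(within_upload_limits(45 * 1024 * 1024, 60))   # True
print(within_upload_limits(45 * 1024 * 1024, 120))  # False
```

A check like this only saves a wasted upload; the actual SynthID detection happens server-side in Gemini, not locally.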
While this verification capability is an extension of an earlier feature for images, it only applies to content created or edited with Google’s own tools. If a video was generated using another AI model, Gemini won’t provide any information. This limitation means the tool is primarily useful within Google’s ecosystem and not a comprehensive solution for verifying AI-generated content from various sources.