4 min read | Saved February 14, 2026
Do you care about this?
This article explains how to make videos visible to large language models (LLMs) like ChatGPT. It introduces Wistia's LLM-friendly embeds, which allow AI to read video transcripts embedded in HTML, enhancing discoverability without affecting user experience. The approach aims to adapt video SEO for the changing search landscape.
If you do, here's more
SEO has evolved; it’s no longer just about Google. Large language models (LLMs) like ChatGPT and Claude are changing how people find information. To keep up, video creators need to ensure their content is discoverable by these AI tools. Wistia’s new LLM-friendly embeds offer a solution. Unlike standard video embeds, these use structured data to make video content readable for LLMs. This means that when AI crawlers scan a webpage, they can access the video’s transcript and context, making the content visible in AI-driven searches.
Videos on platforms like YouTube can't be fully accessed by LLMs because standard embeds rely on JavaScript or iframes, which most AI crawlers can't execute; the crawlers scrape only basic details like titles and descriptions. Wistia's embeds address this by including the video transcript in plain HTML, allowing AI tools to understand the video without impacting the viewer's experience. The embed method ensures that both human users and AI tools can access the same content, enhancing discoverability.
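The idea can be sketched in markup. This is a hypothetical illustration, not Wistia's actual embed code (the URLs, video ID, and class names are placeholders): the player iframe serves human viewers, while the full transcript sits in plain HTML that a crawler which never executes JavaScript can still read.

```html
<!-- Player for human viewers; crawlers that skip JavaScript and iframes miss this -->
<iframe src="https://example.com/embed/abc123" title="Product demo video"
        width="640" height="360" allowfullscreen></iframe>

<!-- Transcript in plain HTML: readable by LLM crawlers (and screen readers) -->
<section class="video-transcript">
  <h2>Transcript</h2>
  <p>Hi, I'm Alex, and in this video I'll walk you through ...</p>
</section>
```

Because the transcript is ordinary page text, it costs nothing to render and can be visually collapsed with CSS without hiding it from crawlers.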
Importantly, using LLM-friendly embeds doesn’t harm traditional SEO. Schema.org markup is included, so Googlebot can still index the video as usual. The slight trade-off in page speed is negligible compared to the benefits of making content accessible to LLMs. This approach isn't cloaking; it's simply providing the same content through different formats based on who or what is accessing it. Wistia’s strategy positions video creators to adapt as LLMs advance, ensuring their content remains relevant and discoverable.
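Schema.org video markup is typically delivered as a JSON-LD script block. A minimal sketch using the standard `VideoObject` type, which Googlebot already indexes, and its `transcript` property (all URLs and values below are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Product demo video",
  "description": "A short walkthrough of the product's main features.",
  "thumbnailUrl": "https://example.com/thumbs/abc123.jpg",
  "uploadDate": "2026-01-15",
  "contentUrl": "https://example.com/videos/abc123.mp4",
  "transcript": "Hi, I'm Alex, and in this video I'll walk you through ..."
}
</script>
```

Keeping the JSON-LD description and transcript in sync with the visible HTML is what makes this different formats of the same content rather than cloaking.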