2 min read | Saved February 14, 2026
Do you care about this?
Meta is creating a new AI model called Mango for image and video generation, along with a text-based model named Avocado aimed at improving coding capabilities. These models are expected to launch in early 2026, following a major restructuring of Meta's AI team that included hiring over 20 researchers from OpenAI.
If you do, here's more
Meta is developing a new AI model for image and video generation, codenamed Mango, alongside the company’s upcoming text-based large language model, known as Avocado. Alexandr Wang, Meta’s AI chief, announced both projects during an internal Q&A session, and both models are expected to be released in the first half of 2026. Wang said Avocado will focus on enhancing coding capabilities and is part of early exploration into AI that learns from its visual environment, an approach referred to as world models.
To bolster its AI initiatives, Meta reorganized its AI team, appointing Wang to lead the new Meta Superintelligence Labs. CEO Mark Zuckerberg personally recruited over 20 researchers from OpenAI, expanding the team to more than 50 AI specialists. The push comes as competition intensifies in AI image and video generation: Meta recently launched an AI video generator called Vibes shortly before OpenAI released a competing tool named Sora, underscoring the ongoing race among major tech companies.