Links
Meta has developed new AI models internally this month, which CTO Andrew Bosworth describes as promising. While details remain sparse, reports suggest the company is building a large language model as well as AI models for images and videos, referred to as Avocado and Mango.
Meta's Chief Technology Officer, Andrew Bosworth, announced that the company's new AI team has developed its first significant models internally. Despite previous criticism of its Llama 4 model, the new models show promise and are expected to improve consumer AI products in the coming years.
Meta AI is set to launch a new model called Avocado, alongside updates that include app integrations with Gmail and Google Calendar. The company is also working on voice agents, scheduled tasks, and potential interoperability with other AI assistants such as Gemini and ChatGPT. While the new features show promise, the Avocado model's performance remains an open question.
Meta has introduced Segment Anything Model 3 (SAM 3), which enhances object detection, segmentation, and tracking in images and videos using text and visual prompts. The release includes model checkpoints, a new playground for experimentation, and applications in platforms like Facebook Marketplace and Instagram's Edits app. SAM 3 also features a data engine that combines AI and human annotators to speed up image and video annotation.