[Illustration: Meta's Mango (image/video) and Avocado (text/code) AI models, represented by a mango and an avocado]


Meta AI lab to debut ‘Avocado’ text model and ‘Mango’ vision model in Q1


Meta’s newly minted AI laboratory has quietly been churning out prototypes, but details have remained sparse until now. While the division’s public roadmap has hinted at broader ambitions, internal milestones have stayed largely under the radar. Now the lab’s chief technology officer, Andrew Bosworth, has fielded questions about whether any of its early-stage systems have moved beyond the research stage.

The timing is notable—Meta’s broader AI push has been marked by high‑profile announcements, yet the concrete deliverables from this unit have been elusive. Stakeholders are watching for signs that the company can translate its sizable investments into usable products. With the first quarter looming, the pressure is on to see whether the text‑generation and multimodal projects will materialize as scheduled.

The following report from the Wall Street Journal offers the latest glimpse into Meta’s internal progress.

According to WSJ, in December, Meta was developing a text-based AI model codenamed 'Avocado', expected to launch in the first quarter, alongside an image- and video-focused model codenamed 'Mango'. Bosworth did not confirm which of these models had been delivered internally. Meta's progress is being closely watched after chief executive Mark Zuckerberg moved to overhaul the company's AI leadership, set up a new lab and aggressively recruit top talent with lucrative compensation packages, as he seeks to position Meta at the forefront of AI development.

Bosworth cautioned that building usable AI systems involves far more than training models alone. "There's a tremendous amount of work to do post-training to actually deliver the model in a way that's usable internally and by consumers," he said. He described 2025 as a "tremendously chaotic year" for Meta, marked by rapid infrastructure build-out, expanded computing capacity and efforts to secure sufficient power to support its AI ambitions.


Will Meta’s early models live up to the hype? The lab’s first internal deliveries, described as promising by CTO Andrew Bosworth, arrived after just six months of work, hardly a long runway. Yet the company has not clarified whether the ‘Avocado’ text model or the ‘Mango’ vision system was among those internal successes.

According to the Wall Street Journal, both codenames are slated for a first‑quarter launch, but the timing and performance remain unverified. Bosworth’s remarks at Davos hinted at momentum, but without concrete benchmarks the claim of “significant promise” is difficult to assess. Meta’s new Superintelligence Labs, founded last year, is still in its infancy; the internal rollout may simply reflect a development milestone rather than a market‑ready product.

Uncertainty surrounds how these models will compare with existing offerings, and whether they will translate into usable consumer services. As the quarter progresses, further details from Meta will be needed to gauge the practical impact of Avocado and Mango.


Common Questions Answered

What are the two AI models Meta is developing under its Superintelligence Labs?

Meta is developing two generative AI models: 'Mango', which focuses on image and video generation, and 'Avocado', a text- and code-focused model. Both are expected to launch in the first quarter and are part of Meta's strategy to catch up with competitors such as OpenAI and Google in the AI race.

Who is leading the development of Meta's new AI models?

The AI models are being developed under Meta Superintelligence Labs (MSL), which is led by Alexandr Wang, co-founder of Scale AI. The project also involves Meta's Chief Product Officer Chris Cox, with the goal of creating AI systems that can reason, plan, and act without needing training for every specific scenario.

What are the specific capabilities of the 'Avocado' and 'Mango' AI models?

'Avocado' is designed to excel in text generation, coding, and logical reasoning, with a priority on improving coding capabilities and performing advanced text analysis. 'Mango' is focused on generating high-fidelity images and videos, aiming to compete with other top-tier visual content creation engines in terms of quality, realism, and creative control.