Indian AI lab Sarvam’s new models are a major bet on the viability of open-source AI

Indian AI lab Sarvam launched new large language models at the India AI Impact Summit, aiming to compete with U.S. and Chinese firms by providing open-source options tailored to local languages. Its 30B and 105B parameter models use a mixture-of-experts architecture and were trained from scratch on trillions of tokens. Sarvam's approach emphasizes efficiency and real-world applications, with plans to open-source both models.
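For readers unfamiliar with the mixture-of-experts design mentioned above, the core idea is that a router activates only a few "expert" sub-networks per token, so compute per token stays close to a much smaller dense model while total parameter count grows. A minimal sketch of top-k routing follows; all sizes and values here are illustrative assumptions, not details of Sarvam's actual models.

```python
# Minimal sketch of mixture-of-experts (MoE) top-k routing.
# The expert count and gate scores below are hypothetical, for illustration only.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_logits, top_k=2):
    """Pick the top_k experts for one token and renormalize their gate weights.

    Only these experts run for this token; the rest are skipped entirely,
    which is where the efficiency of an MoE layer comes from.
    """
    probs = softmax(gate_logits)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen = ranked[:top_k]
    total = sum(probs[i] for i in chosen)
    return [(i, probs[i] / total) for i in chosen]

# One token's gate scores over 4 hypothetical experts:
assignment = route([2.0, 0.5, 1.5, -1.0], top_k=2)
# Only the two highest-scoring experts are activated for this token.
```

In a real model the gate logits come from a learned linear layer and each chosen expert is a feed-forward network whose outputs are mixed by the renormalized weights.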

Key Points

  • Sarvam unveiled new language models with 30 billion and 105 billion parameters.
  • Models aim to capture market share from larger competitors by providing efficient open-source alternatives.
  • The models were trained from scratch on trillions of tokens, with a focus on Indian languages.
  • The 30B model offers real-time conversational support, while the 105B model supports complex reasoning.
  • Models were developed with government support under the IndiaAI Mission, and infrastructure from Yotta and Nvidia.
  • Sarvam plans to open-source both models and focus on practical applications rather than sheer scale.

Relevance

  • Sarvam's launch aligns with current trends emphasizing open-source solutions and language localization in AI, which are crucial for developing economies.
  • As of 2025, there is an increasing trend towards AI models that are accessible and tailored for specific markets, reducing dependence on foreign technologies.
  • Historically, India has aimed to boost its technological independence, making this development part of a larger national strategy.

Sarvam's new AI models represent a strategic move in the growing open-source AI landscape, focusing on efficiency and localization, which could have significant implications for the competitive dynamics of AI technologies in India and beyond.
