An Indian company has unveiled a new artificial intelligence model designed to run on basic phones without an internet connection, a move that could expand AI access in remote regions of the world.
Sarvam AI, a Bengaluru-based company, announced its new suite of models during the India AI Impact Summit in Delhi, an event which concludes on Saturday and has attracted some of the biggest names in tech as keynote speakers.
It is the first time the flagship global AI summit has been hosted in the global South, and India has used the event to position itself as a contender in a sector dominated by the US and China. The summit has featured a series of domestically trained AI systems across sectors, including education, voice technology, healthcare and governance.
But the most talked-about launch was Sarvam’s unveiling of two new large language models, updated speech and vision systems, and an AI assistant that was shown running directly on a Nokia-style brick phone without needing an internet connection.
The system forms part of what the company calls Sarvam Edge – a platform designed to operate directly on smartphones and laptops rather than relying on remote data centres. Sarvam says this allows speech recognition, translation and text-to-speech functions even in areas with weak or no connectivity, a significant factor in parts of India and other developing regions where mobile internet remains inconsistent.
Only around 71 per cent of the world’s population is connected to the internet, according to World Bank data up to 2024, and many areas considered to have connectivity still frequently encounter network challenges.
“We want to serve a billion Indians, and small, efficient models are important for that,” said Aditya Dhawala, a product manager at Sarvam, at the launch event on Wednesday.
At the summit, Sarvam demonstrated its assistant running on a brick or “feature” phone via a phone call, allowing users to interact in Indian languages without an active internet connection. The company has said it is working with HMD, which licenses the Nokia brand, and chipmaker Qualcomm to optimise performance on existing mobile processors.
In a company blog post, Sarvam framed the focus on feature phones as a rethink of how AI is delivered and paid for. “Intelligence should work everywhere. Not summoned from distant servers, not gated behind connectivity, not metered by the query. Just there, immediate and local,” it wrote.
The company says running AI locally removes recurring cloud costs and improves privacy. “There is no per query cost, no usage based pricing, no scaling concerns as your user base grows. The inference cost is already paid. It is embedded in the device,” the blog said.
“Your data never leaves your device… There’s no server logging your queries, no database storing your conversations,” it wrote.
Independent experts said one aspect of the idea is not new – big tech firms have long offered smaller, faster versions of their flagship models, and Apple has pushed on-device AI partly on privacy grounds. However, the challenge has been to make such systems useful on less powerful, cheaper devices and in environments with unreliable connectivity.
“It’s one thing to have an edge model that runs on a modern iPhone and another one that can run on a less powerful phone,” said Karan Girotra, professor of operations, technology and innovation at Cornell Tech.
If the company can consistently deliver that capability on low-cost devices beyond a controlled demo, Girotra suggests the appeal could extend far beyond India’s connectivity gaps.
“There’s a chance that this unique positioning finds a marketplace well beyond India,” he says.
Behind the on-device assistant sits a broader foundation of models unveiled at the summit: a 30-billion-parameter language model and a larger 105-billion-parameter system. Parameters are the internal values a model learns during training; more parameters generally allow it to handle more complex tasks, but they also require more computing power.
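As a rough, back-of-the-envelope illustration (generic estimates, not figures published by Sarvam), the memory needed simply to store a model’s weights grows with the parameter count and the precision each parameter is kept at, which is why only smaller or heavily compressed models can realistically fit on a phone:

```python
# Rough illustration only: approximate memory needed to hold a model's weights.
# Generic arithmetic, not Sarvam's published figures or methods.
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate gigabytes needed just to store the weights."""
    return num_params * bytes_per_param / 1e9

for params in (30e9, 105e9):
    fp16 = weight_memory_gb(params, 2.0)   # 16-bit weights: 2 bytes per parameter
    int4 = weight_memory_gb(params, 0.5)   # 4-bit quantisation: 0.5 bytes per parameter
    print(f"{params / 1e9:.0f}B parameters: ~{fp16:.0f} GB at 16-bit, ~{int4:.0f} GB at 4-bit")
```

Even at 4-bit precision, a 30-billion-parameter model needs on the order of 15 GB for its weights alone, which is why on-device deployments typically rely on far smaller, compressed models than the flagship systems running in data centres.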
By comparison, frontier systems such as OpenAI’s GPT-4 are widely estimated to run into the hundreds of billions, and possibly trillions, of parameters, placing Sarvam’s models below the very largest global systems.
Both models use what is known as a mixture-of-experts architecture, which activates only a fraction of their total parameters at a time, reducing computing costs. The 30B model supports a 32,000-token context window for conversational use, while the 105B model offers a 128,000-token window for more complex reasoning tasks.
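A minimal sketch of the general mixture-of-experts idea, assuming a simple top-k router (illustrative only, not Sarvam’s implementation): a lightweight router scores a pool of expert sub-networks for each token, and only the best-scoring few are actually run, so most of the layer’s parameters sit idle on any single step.

```python
import numpy as np

# Illustrative mixture-of-experts routing, not Sarvam's code.
# A router scores each expert; only the top-k experts run for a given token,
# so most of the layer's parameters are never touched on that step.
rng = np.random.default_rng(0)
num_experts, d_model, top_k = 8, 16, 2

experts = [rng.standard_normal((d_model, d_model)) for _ in range(num_experts)]  # expert weights
router = rng.standard_normal((d_model, num_experts))                             # routing weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    scores = x @ router                                  # one score per expert
    chosen = np.argsort(scores)[-top_k:]                 # indices of the top-k experts
    gates = np.exp(scores[chosen])
    gates /= gates.sum()                                 # softmax over the chosen experts only
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

token = rng.standard_normal(d_model)
print(moe_layer(token).shape)  # (16,) — only 2 of the 8 experts did any computation
```

Applied at scale, the same principle means a model can carry tens of billions of parameters while paying the compute cost of only a fraction of them per token, which is how such architectures keep inference costs down.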
The question, analysts say, is not whether Sarvam can outbuild Silicon Valley or Beijing on sheer model size, but whether it needs to.
“It’s not like you head-on compete for the smartest model,” Girotra says. “The smart move here would be to compete on their strengths, like it happens in every country.”
He said the company appears to have “picked some dimensions where they might have a strategic advantage… and they’re focusing on that which is the right strategy to do so.”
That places Sarvam in a different segment of the AI market.
“Are they competing with ChatGPT? Not for every ChatGPT customer,” he says. “For the higher end enterprise customer, maybe not. But for the person who’s resource-constrained and maybe needs local languages, definitely.”
India’s AI summit has focussed primarily on the issue of “sovereignty” – how it and other developing countries can retain a stake in the way artificial intelligence is being developed worldwide.
Despite some unwelcome distractions, including controversies around an Indian university claiming credit for developing a Chinese robot dog and multiple hours-long evacuations of the summit venue during appearances by prime minister Narendra Modi, there have been a number of announcements pointing to India’s ambition to be a big player in the AI race.
Apart from Sarvam AI, several new models were launched this week. Tech Mahindra, one of India’s biggest IT companies, introduced a Hindi-first language model designed for education and citizen services. The startup Gnani.ai unveiled a multilingual voice AI system, while BharatGen and Fractal Analytics presented models focused on specific, sector-based applications.
Sarvam has said the new models were trained domestically after gaining access to government-backed computing resources, rather than fine-tuning proprietary foreign systems.
The company’s co-founder Vivek Raghavan framed India’s sovereignty push as “a necessity rather than an option for India to maintain its digital independence.”
“Otherwise, we will become a digital colony which is dependent on other countries for this core, core technology,” he said.


