The government has instructed developers working under the IndiaAI Mission to give top priority to addressing and mitigating bias in large language models (LLMs) before they are rolled out for public use.
According to officials at the Ministry of Electronics and Information Technology (MeitY), given India’s vast cultural, linguistic and social diversity, it is essential that AI systems — especially those supported by the government — avoid outputs that could be insensitive or discriminatory when responding to complex or sensitive prompts.
This direction marks a reaffirmation of India’s commitment to ethical and “safe AI.” Under the “Safe & Trusted AI” pillar of the IndiaAI Mission, the government had earlier selected a number of projects focusing on bias mitigation, deepfake detection, and AI security testing.
While IndiaAI provides infrastructure, funding, and development support to a set of domestic AI-model builders, MeitY emphasises that developers must ensure the models are “cultural- and society-aware” — designed for Indian realities, sensitivities and diversity.