When India’s Principal Scientific Advisor, Ajay Kumar Sood, introduced the new AI (Artificial Intelligence) Governance Guidelines under the IndiaAI Mission, he made a simple assertion: they are built on the principle of “do no harm.” Behind the moral anchor is an ambitious agenda. IDC estimates that India’s AI spending will exceed $9 billion by 2028, growing at more than 35 per cent annually. Faster adoption across industries, says NITI Aayog, can add $500-600 billion to GDP by 2035, roughly a fifth of the economy’s current size.
The IndiaAI Mission, backed by Rs2,000 crore in funding, seeks to anchor that growth in responsible innovation. The guidelines are structured around seven principles (Sutras) and phased into short-, medium-, and long-term actions. They promise to make AI “safe, inclusive, and human-centric,” in the words of the ministry’s secretary, S. Krishnan. The Sutras are inclusivity, safety, transparency, accountability, fairness, adaptability, and sustainability. Together they translate tech ethics into national policy.
AI is no longer confined to research; it is central to finance, healthcare, defence, and mining. In financial services, the AI-in-BFSI market, estimated at $830 million in 2024, is projected to exceed $8 billion by 2033, growing nearly 29 per cent annually. Banks using AI-driven credit scoring and fraud detection will have to maintain bias-testing records and audit logs, giving consumers greater assurance that credit or insurance decisions are explainable and contestable.
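That projection is easy to sanity-check (a back-of-envelope calculation of ours, not the report’s): growing $830 million to $8 billion over the nine years from 2024 to 2033 implies a compound annual growth rate of
\[
\left(\frac{8{,}000}{830}\right)^{1/9} - 1 \approx 0.286,
\]
or roughly 29 per cent a year, consistent with the stated figure.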
In healthcare, as per the IMARC report, the AI market, valued at $333 million, may exceed $4 billion by 2033. Transparency and patient consent will become legal obligations rather than design choices. Patients will gain visibility into how diagnostic algorithms interpret scans, and hospitals will need to disclose whether AI recommendations were verified by human clinicians.
The defence establishment, bolstered by a DRDO budget of nearly Rs27,000 crore (up 12 per cent), is embedding AI into logistics, surveillance, and autonomous systems. The new framework adds an ethical layer to national-security AI, emphasising human oversight and algorithmic explainability. Even mining is being transformed by data intelligence. The IndiaAI-GSI hackathon is driving algorithmic mineral mapping for rare earths and nickel-platinum deposits, even as critical-mineral blocks worth Rs30 trillion are auctioned nationwide. AI’s ability to process vast geospatial datasets could shorten exploration cycles dramatically, but it raises questions about data accuracy, environmental oversight, and the transparency of public-resource decisions.
The guidelines thus attempt to rebalance the relationship between technology creators, markets, and users. For citizens, they promise a modest but measurable gain in digital trust. Personal-data handling, algorithmic bias, and opaque AI-powered services (from loan apps to job portals) have so far operated in a grey zone; the new rulebook will draw boundaries. Users can expect clearer disclosures, grievance-redressal mechanisms, and the right to demand an explanation for automated outcomes. Such measures may narrow the trust gap.
For companies, the balance between innovation and compliance is delicate. Building explainable-AI pipelines, maintaining bias logs, and integrating accountability layers will increase costs. Yet these systems may become market differentiators: globally, enterprises that can demonstrate trustworthy AI increasingly win contracts from clients and governments seeking low-risk partnerships. Compliance, in other words, becomes a competitive asset.
Foreign tech firms will face tougher adaptation requirements. India’s new framework sits midway between the European Union’s AI Act, the US NIST AI Risk Management Framework, and China’s algorithm-filing regime. Europe mandates risk-tier classification and conformity assessments; the US prefers voluntary standards; China embeds content moderation and surveillance. India borrows from the EU’s accountability, America’s flexibility, and China’s ambition, while deliberately avoiding their extremes.
For startups, the path is more complex. Indian AI and GenAI ventures attracted nearly $600 million in 2024, and cumulative funding since 2020 has crossed $1.5 billion. The IndiaAI Mission’s sandbox approach, which allows experimentation in supervised environments, offers protection but slows rollouts. Experts note the need to “create sandboxes for innovation, and ensure risk mitigation within a flexible, adaptive system.” Venture capitalists now include AI-ethics and compliance clauses in term sheets, mirroring Europe’s shift. Governance readiness will be as vital as product-market fit.
The trade-offs are inevitable. A rigid compliance regime can discourage risk-taking, while unchecked experimentation may erode public trust. India’s answer is an ecosystem of managed pragmatism, one that embeds ethical intent while allowing learning by doing. The Government plans to issue sector-specific frameworks with graded accountability: stricter oversight for healthcare and finance, lighter rules for creative AI.
Emerging economies are seeking a governance model that balances innovation and responsibility, and India’s inclusive, iterative approach may become a reference point. As one expert explains, “The IndiaAI Mission will enable this ecosystem, and inspire many nations, especially across the Global South.” For Indian users, this implies a future where AI is not only more visible but also more answerable: an era where a chatbot, diagnostic tool, or lending algorithm must explain itself, and where trust is built into the design rather than retrofitted through crises. For businesses, it marks a new race that rests on both performance metrics and credibility.
India’s new rulebook, if implemented in earnest and in good faith, may do more than govern the machines. It can redefine the social contract between technology and the people who live with it, making responsibility not the cost of innovation but its most enduring competitive edge.