The IndiaAI Mission will sell you an H100 GPU hour for about 78 cents. If you're a startup building what the government calls an "indigenous foundational model," that same hour costs zero dollars; the state picks up the entire tab. Meanwhile, commercial Indian cloud providers charge around $2.99 per H100 hour, and AWS Mumbai quotes north of $4. Same chip. Same hour. Completely different invoice.

This four-fold price spread on what should be the most fungible commodity in AI distorts everything it touches. According to analysis by Timlig Engineering, the subsidy shifts the economic calculation from "does this GPU earn its keep" to "how much can we grab." Nobody has an incentive to measure whether anyone actually uses what they're allocated: the IndiaAI Mission's $500 million compute budget pays for chips regardless.

When the IndiaAI tender for 2,400 H100s went out, AWS Managed Service Providers walked away rather than match the floor price. They knew participating would set a reference rate that would haunt their commercial pricing in India forever. That tells you what the dominant hyperscaler thinks the real price of an H100 hour is.

Here's the uncomfortable math underneath all of this. Even the world's most sophisticated infrastructure teams achieve only 38 to 43 percent Model FLOPs Utilization (MFU) on massive H100 training clusters during flagship model runs, based on publicly disclosed figures. The other 57 to 62 percent of paid compute time goes to memory reads, communication waits, failure recovery, and overhead that never improves the model. Most production inference runs at 20 to 40 percent utilization, and most enterprise fine-tuning runs even lower. These numbers come from teams with unlimited budgets and top-tier talent.

nvidia-smi will cheerfully report 100 percent GPU utilization while delivering a fraction of that in actual useful work. The metric looks great in dashboards. It just doesn't mean what people think it means.

Then there's the arbitrage risk.
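Before turning to that arbitrage math, the utilization figures are worth making concrete. A minimal sketch, using the sticker prices and the roughly 40 percent MFU ceiling quoted above; the "useful hour" framing is an illustrative simplification, not a billing construct anyone actually uses:

```python
# Sketch: effective cost per *useful* H100-hour once utilization is
# factored in. Sticker prices and the ~40% MFU ceiling are the figures
# quoted in the text; "useful hour" is an illustrative simplification.

PRICES = {  # USD per H100-hour, sticker
    "IndiaAI subsidized": 0.78,
    "Indian commercial cloud": 2.99,
    "AWS Mumbai": 4.00,
}

def effective_cost(sticker: float, mfu: float) -> float:
    """Price per hour of compute that actually advances the model."""
    return sticker / mfu

for name, sticker in PRICES.items():
    print(f"{name}: ${sticker:.2f} sticker -> "
          f"${effective_cost(sticker, 0.40):.2f}/useful hour at 40% MFU")
```

At 40 percent MFU, the $4 AWS Mumbai hour really costs $10 per useful hour, and even the 78-cent subsidized hour costs about $1.95. The waste multiplier applies to the subsidy too.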
The spread between subsidized access at under a dollar and global market rates above $4 creates obvious incentives. Bad actors could establish eligible shell startups, secure cheap compute through the tiered subsidy structure, and resell capacity via API wrapping services to foreign entities willing to pay market rates. Cloud compute is a commodity. Enforcing strict geolocation and usage verification across distributed workloads is hard. If this arbitrage happens at scale, India's compute subsidy would directly fund global AI competitors instead of building domestic capability as intended.
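The strength of that incentive is easy to quantify. A back-of-envelope sketch of the resale margin, using the rates quoted above; the fleet size and annual hours are hypothetical round numbers chosen purely for illustration:

```python
# Sketch: gross margin from reselling subsidized GPU-hours at market
# rates. Rates are the ones quoted in the text; fleet size and uptime
# below are hypothetical round numbers for illustration only.

SUBSIDIZED_RATE = 0.78  # USD/H100-hour under IndiaAI (startups pay $0)
MARKET_RATE = 4.00      # USD/H100-hour, AWS Mumbai order of magnitude

def gross_arbitrage(gpus: int, hours_per_year: int,
                    buy: float = SUBSIDIZED_RATE,
                    sell: float = MARKET_RATE) -> float:
    """Gross annual profit from reselling subsidized capacity."""
    return gpus * hours_per_year * (sell - buy)

# Hypothetical: a 100-GPU shell operation running 8,000 hours/year.
print(f"${gross_arbitrage(100, 8_000):,.0f} gross per year")  # $2,576,000
```

A $3.22-per-hour spread turns a modest shell operation into a multi-million-dollar-a-year business before any enforcement friction, which is why geolocation and usage verification matter so much here.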
India Will Sell You H100 Hours for 78 Cents
The IndiaAI Mission offers H100 GPU hours at 78 cents to researchers and free to startups building indigenous foundational models. Commercial clouds charge $3-4 for the same hour. This four-fold price spread distorts incentives and creates arbitrage opportunities that could redirect Indian subsidies to foreign competitors.