India's government will sell you an hour on an Nvidia H100 for 78 cents through its IndiaAI Mission. Build an "indigenous foundational model" and they'll give it to you for free. Compare that to $2.99 at an Indian neocloud or $4.00 at AWS Mumbai. Same chip, same hour, a 4x price spread. The program has onboarded 38,000-plus GPUs across 14 providers, backed by roughly $500 million in compute subsidies from a $1.14 billion total budget. The Minister called it "the cheapest compute facility in the world," and he's probably right.
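The spread is easy to verify. A quick sketch using the per-hour rates quoted above (the "4x" figure matches the neocloud comparison; against AWS Mumbai it is closer to 5x):

```python
# H100-hour rates quoted in the text, in USD.
SUBSIDIZED = 0.78   # IndiaAI Mission rate
NEOCLOUD = 2.99     # Indian neocloud rate
AWS_MUMBAI = 4.00   # AWS Mumbai rate

print(f"neocloud / subsidized:   {NEOCLOUD / SUBSIDIZED:.1f}x")    # ~3.8x
print(f"AWS Mumbai / subsidized: {AWS_MUMBAI / SUBSIDIZED:.1f}x")  # ~5.1x
```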

The real economics story here is invisible. An AI company's gross margin is mostly a function of two numbers nobody sees: GPU utilization rate and depreciation schedule. A leading US hyperscaler extended GPU useful life from 4 to 6 years between 2022 and 2024, adding roughly $3 billion to annual operating income through accounting, not better operations. India adds a third variable: government checks covering the GPU bill. When someone tells you their Indian AI startup has 70% gross margins, that sentence, as Timlig puts it, has "approximately the same epistemic content" as saying "my house is worth a lot because I really like it."

The real cost question hinges on utilization. The best publicly disclosed Model FLOPs Utilization (MFU) from a 16,384-H100 cluster running a 405-billion-parameter model over 54 days was 38 to 43 percent. That's the state of the art. The other 57 to 62 percent of wall-clock time went to memory operations, communication overhead, and failure recovery. Most production inference runs at 20 to 40 percent, meaning more than half the hardware capacity being paid for does no useful work. Yet under IndiaAI's subsidy structure, nobody has an incentive to measure or optimize this. The chips are paid for. The hours are paid for. Why care if your MFU is 22 percent?
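One way to see why nobody cares: convert hourly price into dollars per exaFLOP of useful work, which scales with price divided by MFU. A minimal sketch, assuming the H100's published dense BF16 peak of roughly 989 TFLOPS and pairing the subsidized rate with sloppy utilization against the market rate with state-of-the-art utilization (illustrative pairings, not measured data):

```python
H100_PEAK_FLOPS = 989e12  # H100 dense BF16 peak, FLOP/s (published spec)

def dollars_per_useful_exaflop(price_per_hour: float, mfu: float) -> float:
    """Effective price of compute that actually does model work."""
    useful_flops_per_hour = mfu * H100_PEAK_FLOPS * 3600
    return price_per_hour / useful_flops_per_hour * 1e18

subsidized_sloppy = dollars_per_useful_exaflop(0.78, 0.22)  # IndiaAI rate, 22% MFU
neocloud_tuned = dollars_per_useful_exaflop(2.99, 0.43)     # market rate, best-case MFU
print(f"${subsidized_sloppy:.2f} vs ${neocloud_tuned:.2f} per useful EFLOP")
```

On these assumed numbers, an hour at the subsidized rate wasted at 22 percent MFU still buys useful FLOPs more cheaply than a well-tuned hour at market rates, which is exactly why the incentive to optimize evaporates.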

The 14 empaneled providers aren't scrappy startups either. They're India's biggest industrial conglomerates: Yotta Data Services (backed by real estate dynasty Hiranandani Group), Reliance Jio (India's largest company), and Tata Communications. The government's $500 million compute subsidy effectively backstops data center expansion for these giants and for US hyperscalers. AWS declined to match the lowest IndiaAI tender bid, walking away from revenue rather than establish a reference price showing an H100-hour can cost under $2. That tells you what an actual market participant thinks the real price is. In a market where Microsoft previously held an exclusive grip on AI compute, AWS's reluctance carries weight.