
India's startups move into hardware as they design AI-native data centres


India’s tech scene has long been software‑first, with founders steering clear of the capital‑heavy world of chips, racks and cooling systems. Yet the push to build AI‑native data centres is nudging a new breed of entrepreneurs toward the metal. While venture capital still pours into algorithms and platforms, the cost of training large models is forcing a rethink: you can’t scale AI without owning the underlying compute fabric.

That realization is reshaping how early‑stage firms view infrastructure—not just as a service to buy, but as a product to engineer. In this climate, partnerships between pure‑play digital‑infrastructure outfits and emerging AI builders are gaining traction, suggesting a broader move toward self‑reliant, sovereign compute capabilities. The shift is subtle but significant; it hints at a rebalancing of investment risk and a willingness to shoulder the hardware burden.

India's startup community, traditionally allergic to hardware investment, is crossing into infrastructure design. According to Invenia's CEO and whole-time director, Pankaj Malik, the digital infrastructure and IT services company partners with a growing group of AI infrastructure builders developing GPU clusters and training environments. They focus on providing high-density, GPU-ready connectivity that ensures low latency and high throughput for efficient distributed AI training. Their support includes scalable network architectures that utilise automation, GIS-based planning, and repeatable design templates, allowing builders to scale clusters based on demand without large upfront investments.
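The article doesn't name the software stack these builders run, but the workload it describes, distributed AI training across GPU clusters, typically means every GPU synchronising gradients over the network at each step. Here is a minimal sketch, assuming a PyTorch/NCCL setup; the model and launch details are placeholders, not anything Invenia has disclosed:

```python
# Minimal multi-node training setup, assuming PyTorch with the NCCL
# backend; an illustrative sketch, not Invenia's or its partners' stack.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    # NCCL handles the GPU-to-GPU collectives whose traffic the
    # low-latency, high-throughput fabric described above must carry.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by the torchrun launcher
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    # ... training loop goes here: after each backward pass, DDP
    # all-reduces gradients across every GPU in the cluster, so network
    # latency and throughput directly bound the time per step.

if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nnodes=4 --nproc_per_node=8 train.py`, this is the traffic pattern that makes rack-level network design a first-order concern for these builders.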


Is the term ‘AI‑native data centre’ more slogan than substance? In India’s tech corridors, founders, cloud providers, system integrators and even state governments now toss the phrase around, promising a future where startups build rather than merely buy compute. The chorus is loud, and the stakes are real: control of infrastructure could shape the next epoch of the AI economy.

Yet whether ‘AI‑native’ signals a genuine structural shift or simply a marketing label remains uncertain. Traditionally allergic to hardware, India’s startup community is crossing into infrastructure design, a move highlighted by Invenia’s CEO Pankaj Malik, who notes his firm’s partnership with a growing group of AI infrastructure builders. This pivot suggests a willingness to engage with sovereign compute, but concrete outcomes are still unclear.

Without broader data on investment levels or deployment success, it is hard to say how far these initiatives will shift the competitive balance. For now, the facts point to an emerging trend, tempered by unanswered questions about its durability and impact.


Common Questions Answered

Why are India's startups shifting from software‑first to hardware investment for AI‑native data centres?

The high cost of training large AI models forces startups to own the compute fabric, as scaling AI without dedicated hardware is impractical. This realization is prompting entrepreneurs to design GPU clusters and other infrastructure rather than relying solely on third‑party services.

What role does Invenia and its CEO Pankaj Malik play in the emerging AI infrastructure ecosystem?

Invenia partners with a growing group of AI infrastructure builders developing GPU clusters, supplying the high‑density, GPU‑ready connectivity that gives those clusters low latency and high throughput. Pankaj Malik emphasizes that this collaboration helps startups create efficient training environments within AI‑native data centres.

How do high‑density, GPU‑ready connectivity and low latency contribute to the efficiency of AI‑native data centres?

High‑density GPU setups maximize compute power per rack, while low‑latency connections reduce data transfer delays between nodes. Together, they enable faster model training and more efficient distribution of workloads across the data centre.
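A rough cost model makes that concrete. The sketch below assumes a ring all-reduce and illustrative numbers (64 GPUs, a 400 Gb/s fabric, 1 GB of gradients); none of these figures come from the article:

```python
# Back-of-envelope ring all-reduce timing: every gradient sync pays a
# bandwidth term plus a per-step latency term. Illustrative model only.
def allreduce_seconds(grad_bytes: float, n_gpus: int,
                      bandwidth_gbps: float, latency_s: float) -> float:
    steps = 2 * (n_gpus - 1)                   # scatter-reduce + all-gather phases
    bytes_moved = steps * grad_bytes / n_gpus  # per-GPU traffic in a ring
    bandwidth_bytes = bandwidth_gbps * 1e9 / 8
    return bytes_moved / bandwidth_bytes + steps * latency_s

# 1 GB of gradients across 64 GPUs on a 400 Gb/s fabric:
fast = allreduce_seconds(1e9, 64, 400, 5e-6)  # ~5 us hops (assumed low-latency fabric)
slow = allreduce_seconds(1e9, 64, 400, 5e-4)  # ~500 us hops (assumed congested network)
print(f"per sync: {fast * 1e3:.1f} ms vs {slow * 1e3:.1f} ms")  # ~40.0 ms vs ~102.4 ms
```

Under these assumptions, pushing per-hop latency from microseconds toward a millisecond more than doubles the time spent on each gradient sync, which is exactly the overhead a low-latency, high-throughput fabric is meant to avoid.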

Is the term ‘AI‑native data centre’ considered a marketing slogan or a genuine structural shift in India’s tech landscape?

The article suggests the phrase is widely used by founders, cloud providers, system integrators, and state governments, reflecting real ambitions to build rather than buy compute. However, it also notes uncertainty about whether this represents a substantive change or merely promotional language.
