Policy & Regulation

Qlik: Nearly half of Indian firms see data quality, governance as AI bottlenecks


India’s rush to embed AI across finance, manufacturing and services is already hitting a familiar wall: the data that powers those models. Executives keep saying the raw material (clean, traceable, well-governed information) is often missing or, at best, unreliable. That shortfall means teams spend more time scrubbing datasets than actually building predictive pipelines, which drags out time-to-value and pushes costs higher.

At the same time, policymakers are drafting rules slated for 2026 that will likely require explicit provenance and audit trails for any automated decision-making. Companies that ignore those requirements could face fines, reputational damage, or even be barred from rolling out certain AI solutions. Because of that, the market is turning its eye toward platforms that bake lineage, auditability and quality checks straight into the analytics stack.

It isn’t just about ticking compliance boxes; it feels more like building a foundation that lets AI scale without sacrificing trust or running afoul of the law.


Qlik research shows nearly half of Indian enterprises cite data quality and governance as their biggest AI bottlenecks. Governed analytics platforms, offering lineage, auditability and quality controls, are now critical to scaling AI responsibly and preparing for the upcoming 2026 regulations. Sajith Nambiar, head of solutions at UST, said that Responsible AI is embedded into the company's accelerators through metadata-driven validation of data quality, lineage and consent.

Explainability frameworks generate contextual narratives for every AI decision with human-in-the-loop oversight, ensuring ethical alignment. Their goal: systems that are "accurate, explainable, auditable and ethically governed."

The Road Ahead

India's next stage of AI maturity will be defined not by model speed but by the strength of the governance structures that power it. The message across industries is clear: governance must be designed into AI from day zero.

Related Topics: #Qlik #data quality #governance #AI #2026 regulations #lineage #auditability #Responsible AI #UST #explainability

Data quality and governance have jumped to the top of the list of AI headaches for Indian companies: almost half say these are the biggest blockers, Qlik’s research shows. That’s a big deal because AI isn’t just a lab toy any more; it’s shaping how companies talk to customers, run regulated processes and make day-to-day decisions. The DPDP Act has made everyone look again at how data is collected, processed, monitored and explained.

So governance is no longer a tick-box for compliance; it feels more like a core skill you need to have. Governed analytics platforms that give you lineage, audit trails and quality checks are being touted as must-have tools if you want to scale AI responsibly. Still, it’s unclear whether every firm will get these platforms in place before the 2026 rules kick in.

Companies admit that trust, safety and accountability have to keep up with model roll-outs, yet many are still sketching out concrete roadmaps. The vibe is that without a solid data foundation, AI projects could stall. In practice, the sector is juggling fast-paced innovation against the need for tighter data oversight.

Common Questions Answered

What proportion of Indian enterprises cite data quality and governance as their biggest AI bottlenecks according to Qlik research?

Qlik research indicates that nearly half of Indian enterprises identify data quality and governance as the primary obstacles to AI adoption. This widespread concern reflects the difficulty of obtaining clean, traceable data for reliable model performance.

How are governed analytics platforms expected to help Indian firms meet the upcoming 2026 AI regulations?

Governed analytics platforms provide lineage tracking, auditability, and built‑in quality controls, which are essential for complying with the 2026 regulatory framework. By ensuring data provenance and enforceable quality standards, these platforms enable firms to scale AI responsibly while satisfying future compliance requirements.
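The article doesn't describe how any specific platform implements lineage tracking, but the idea of an append-only audit trail per dataset can be sketched in a few lines. Everything below (the `LineageEvent` fields, the `AuditTrail` class, the operation names) is an illustrative assumption, not a description of Qlik's or anyone else's product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class LineageEvent:
    """One step in a dataset's history: what happened, to what, and by whom."""
    dataset: str
    operation: str      # e.g. "ingest", "clean", "train"
    actor: str          # pipeline job or user responsible for the step
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    """Append-only log of lineage events, queryable per dataset."""
    def __init__(self) -> None:
        self._events: List[LineageEvent] = []

    def record(self, dataset: str, operation: str, actor: str) -> None:
        self._events.append(LineageEvent(dataset, operation, actor))

    def history(self, dataset: str) -> List[LineageEvent]:
        return [e for e in self._events if e.dataset == dataset]

trail = AuditTrail()
trail.record("customer_txns", "ingest", "etl_job")
trail.record("customer_txns", "clean", "etl_job")
trail.record("customer_txns", "train", "model_pipeline")
print([e.operation for e in trail.history("customer_txns")])
```

The append-only design matters for auditability: a regulator asking "where did this training data come from?" can be answered by replaying the event history rather than reconstructing it after the fact.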

What role does metadata‑driven validation play in UST’s Responsible AI accelerators, as mentioned by Sajith Nambiar?

Sajith Nambiar explains that UST embeds Responsible AI into its accelerators through metadata‑driven validation of data quality, lineage, and consent. This approach automatically checks that datasets meet governance criteria before they are used in predictive pipelines, reducing manual cleaning effort.

Why has the DPDP Act shifted data governance from a compliance checkbox to a core capability for Indian businesses?

The DPDP Act mandates stricter oversight of data collection, processing, monitoring, and explanation, compelling firms to embed governance into everyday operations. As AI moves from experimental sandbox projects to critical customer‑facing and operational decisions, robust governance becomes essential for both legal compliance and business reliability.
