
Qlik: Nearly half of Indian firms see data quality, governance as AI bottlenecks


India’s rush to embed artificial intelligence across sectors is hitting a familiar wall: the data that fuels those models. Executives across finance, manufacturing and services are reporting that the raw material for AI—clean, traceable, well‑governed information—is often missing or unreliable. That shortfall forces teams to spend more time cleaning datasets than building predictive pipelines, slowing time‑to‑value and inflating costs.

At the same time, policymakers are drafting a set of rules slated for 2026 that will demand explicit provenance and audit trails for any automated decision‑making. Companies that ignore those requirements risk fines, reputational damage, or even being barred from deploying certain AI solutions. As a result, the market is turning its attention to platforms that embed lineage, auditability and quality checks directly into the analytics stack.

The shift isn’t just about compliance; it’s about creating a foundation that lets AI scale without compromising trust or legality.


Qlik research shows that nearly half of Indian enterprises cite data quality and governance as their biggest AI bottlenecks. Governed analytics platforms, which offer lineage, auditability and quality controls, are now seen as critical to scaling AI responsibly and preparing for the upcoming 2026 regulations. Sajith Nambiar, head of solutions at UST, said Responsible AI is embedded into the company's accelerators through metadata-driven validation of data quality, lineage and consent.

Explainability frameworks generate contextual narratives for every AI decision, with human-in-the-loop oversight ensuring ethical alignment. The goal, he said, is systems that are "accurate, explainable, auditable and ethically governed."

The Road Ahead

India's next stage of AI maturity will be defined not by model speed but by the strength of the governance structures that power it. The message across industries is clear: governance must be designed into AI from day zero.
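The article does not describe UST's explainability framework in detail. As a generic illustration only, the "contextual narrative with human-in-the-loop oversight" pattern it mentions might look like the sketch below; every function name, feature name and threshold here is a hypothetical assumption, not UST's implementation.

```python
def narrate_decision(features: dict, score: float, threshold: float = 0.5) -> str:
    """Build a plain-language narrative for an automated decision.
    The feature names and scoring rule are illustrative, not any vendor's model."""
    # Pick the feature with the largest absolute contribution as the headline factor.
    top = max(features, key=lambda k: abs(features[k]))
    outcome = "approved" if score >= threshold else "declined"
    return (f"Application {outcome} (score {score:.2f} vs threshold {threshold}); "
            f"most influential factor: {top} = {features[top]}")

def human_review_required(score: float, threshold: float = 0.5, band: float = 0.1) -> bool:
    """Human-in-the-loop gate: route borderline scores to a human reviewer
    instead of letting the system decide autonomously."""
    return abs(score - threshold) < band

print(narrate_decision({"income": 0.4, "missed_payments": -0.7}, score=0.42))
print(human_review_required(0.42))  # True: inside the review band, so a human decides
```

The point of the pattern is that the narrative and the review gate are produced for *every* decision, not generated after the fact when a complaint arrives.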


Data quality and governance now top the list of AI obstacles for Indian firms: nearly half of enterprises point to these issues, according to Qlik research.

Why does this matter? Because AI is no longer a sandbox experiment; it is driving customer interactions, regulated workflows, and operational decisions. The DPDP Act has forced a re-examination of how data is collected, processed, monitored and explained.

As a result, governance has moved from a compliance checkbox to a core capability. Governed analytics platforms—offering lineage, auditability and quality controls—are being positioned as essential tools for scaling AI responsibly. Yet, whether all firms will adopt such platforms before the 2026 regulations take effect remains unclear.

Companies acknowledge that trust, safety and accountability must keep pace with model deployment, but concrete implementation roadmaps are still emerging. The shift suggests a growing recognition that without solid data foundations, AI initiatives risk stalling. In practice, the industry faces a balancing act between rapid innovation and the need for rigorous data oversight.


Common Questions Answered

What proportion of Indian enterprises cite data quality and governance as their biggest AI bottlenecks according to Qlik research?

Qlik research indicates that nearly half of Indian enterprises identify data quality and governance as the primary obstacles to AI adoption. This widespread concern reflects the difficulty of obtaining clean, traceable data for reliable model performance.

How are governed analytics platforms expected to help Indian firms meet the upcoming 2026 AI regulations?

Governed analytics platforms provide lineage tracking, auditability, and built‑in quality controls, which are essential for complying with the 2026 regulatory framework. By ensuring data provenance and enforceable quality standards, these platforms enable firms to scale AI responsibly while satisfying future compliance requirements.
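To make "lineage tracking and auditability" concrete, here is a minimal sketch of the underlying idea: every transformation appends an immutable audit record linked to its parent datasets, so a regulator can trace a model input back to its source. The names and structure below are illustrative assumptions, not any vendor's API.

```python
import datetime
import hashlib
import json

# Append-only audit trail; a production platform would use tamper-evident storage.
audit_log = []

def record_step(dataset_id: str, operation: str, parent_ids: list) -> str:
    """Append a lineage record and return a content hash identifying this step."""
    entry = {
        "dataset_id": dataset_id,
        "operation": operation,
        "parents": parent_ids,  # lineage edges back to the inputs of this step
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Hash the record so later tampering with any field is detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry["hash"]

raw = record_step("customers_raw", "ingest", [])
clean = record_step("customers_clean", "dedupe+null-fill", [raw])
print(len(audit_log))  # 2 records: the full chain from raw ingest to cleaned data
```

Because each record names its parents, walking the `parents` links from any dataset reconstructs its full provenance, which is the property the 2026-style provenance requirements described above would demand.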

What role does metadata‑driven validation play in UST’s Responsible AI accelerators, as mentioned by Sajith Nambiar?

Sajith Nambiar explains that UST embeds Responsible AI into its accelerators through metadata‑driven validation of data quality, lineage, and consent. This approach automatically checks that datasets meet governance criteria before they are used in predictive pipelines, reducing manual cleaning effort.
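UST's accelerators are proprietary, but the general shape of metadata-driven validation of quality, lineage and consent can be sketched generically. All field names, thresholds and checks below are illustrative assumptions, not UST's actual schema or logic.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    """Illustrative metadata record; field names are assumptions, not UST's schema."""
    name: str
    source_system: str  # lineage: where the data originated
    consent_scope: set = field(default_factory=set)  # purposes users consented to
    null_fraction: float = 0.0  # quality: share of missing values

def validate(meta: DatasetMetadata, purpose: str, max_nulls: float = 0.05) -> list:
    """Return a list of governance violations; an empty list means the
    dataset may be released to a downstream pipeline for this purpose."""
    issues = []
    if not meta.source_system:
        issues.append("lineage: unknown source system")
    if purpose not in meta.consent_scope:
        issues.append(f"consent: no consent recorded for purpose '{purpose}'")
    if meta.null_fraction > max_nulls:
        issues.append(f"quality: {meta.null_fraction:.0%} nulls exceeds {max_nulls:.0%}")
    return issues

meta = DatasetMetadata("loans_q3", "core_banking",
                       consent_scope={"credit_scoring"}, null_fraction=0.02)
print(validate(meta, "marketing"))       # blocked: no consent for this purpose
print(validate(meta, "credit_scoring"))  # [] -> cleared for use
```

The key idea is that the checks run on metadata before any rows reach a model, which is what lets a platform reduce manual cleaning effort rather than discover problems after training.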

Why has the DPDP Act shifted data governance from a compliance checkbox to a core capability for Indian businesses?

The DPDP Act mandates stricter oversight of data collection, processing, monitoring, and explanation, compelling firms to embed governance into everyday operations. As AI moves from experimental sandbox projects to critical customer‑facing and operational decisions, robust governance becomes essential for both legal compliance and business reliability.
