Prompt Security's Itamar Golan: AI security must be a category, not a feature
When generative AI first showed up in the tools companies use, most vendors sounded the alarm - “this could blow up your security,” they warned. Since then the chatter has quieted a bit, and we’re seeing actual defenses being built. Itamar Golan, co-founder of Prompt Security, says the industry can’t keep treating the problem like a one-off fix; it probably needs its own market segment.
Today executives are staring at an ever-growing “AI sprawl” that touches chatbots, code assistants and data-generation pipelines. CISOs are no longer happy just blocking URLs or writing ad-hoc rules; they seem to want a playbook that makes AI risk a core part of security, not an afterthought. That change in mindset, I think, is why Golan notes the conversation has moved from “it’s happening” to concrete steps you can take.
As the market has settled, our pitch has shifted too - from “this is happening” to “here’s how you stay ahead.” Most security leaders now get that AI sprawl is massive and that simple URL filters won’t cut it. Rather than argue about the problem, they’re hunting for a way to use AI safely.
As the market matured, our messaging shifted from "this is happening" to "here's how you stay ahead." CISOs now fully recognize the scale of AI sprawl and know that simple URL filtering or basic controls won't suffice. Instead of debating the problem, they're looking for a way to enable safe AI use without the operational burden of tracking every new tool, site, copilot, or AI agent employees discover. By the time of the acquisition, our positioning centered on being the safe enabler: a solution that delivers visibility, protection, and governance at the speed of AI innovation. Our research shows that enterprises are struggling to get approvals from senior management to deploy GenAI security tools.
Golan doesn’t think AI security is just another checkbox. He argues, as Prompt Security’s CEO, that protecting generative AI should be treated as its own category, not a bolt-on to existing tools. The shadow-AI sprawl he mentions has already outgrown simple URL filters.
He pointed to a recent breach, details were sparse, but it made clear that basic controls won’t cut it. As the market has matured, the company’s message has shifted from “this is happening” to “here’s how you stay ahead.” Today many CISOs seem to recognize the sheer scale of AI sprawl; they’re no longer debating the problem, they’re looking for practical ways to use AI safely. Prompt Security is positioning its platform as a market-leading solution rather than a bag of add-ons.
It’s still unclear whether most organizations will adopt a category-first approach, but the focus on a dedicated platform does signal a clear strategic choice. I’m left wondering how fast the wider security community will rally around this view.
Common Questions Answered
What does Itamar Golan mean by treating AI security as a distinct category rather than a feature?
Golan argues that protecting generative AI requires its own dedicated market segment, with specialized solutions, instead of being tacked onto existing security products. This approach ensures comprehensive coverage of AI‑specific risks such as prompt injection and model manipulation, which generic tools often miss.
How has Prompt Security's messaging evolved as the market for generative AI matured?
Initially, Prompt Security warned that AI threats were imminent, but as adoption grew, the company shifted to offering concrete ways to stay ahead of the risk. Their current positioning emphasizes proactive, scalable defenses that let organizations use AI safely without constant manual oversight.
Why are simple URL filtering and basic controls insufficient for managing AI sprawl, according to the article?
AI sprawl now includes chatbots, code assistants, and data‑generation pipelines that operate beyond traditional web traffic, rendering URL filters ineffective. Basic controls cannot detect malicious prompts or model‑level attacks, so more sophisticated, AI‑aware security measures are required.
What challenges do CISOs face when trying to enable safe AI use without tracking every new tool, as described by Prompt Security?
CISOs must balance rapid AI adoption with the operational burden of monitoring countless new agents, sites, and copilots that employees discover. Prompt Security aims to provide a unified solution that automates policy enforcement across the entire AI ecosystem, reducing the need for manual tracking.