Rethinking DLP for the AI Era - Security That Enables the Business
For many organizations, Data Loss Prevention has earned an unfortunate nickname: the Department of No.

A tool meant to protect the business often ends up slowing it down. Security teams deploy DLP to protect sensitive data, but in practice it creates friction and operational overload.
Legitimate work gets blocked. Alerts pile up. And real incidents still slip through.
After two decades, DLP remains a core part of many enterprise security programs, and a constant source of frustration for the teams running it.
But it doesn’t have to be this way.
I’ve seen this problem from multiple angles throughout my career. I started as a security researcher and software engineer, deep in technology. Later, as a solutions architect, I worked closely with enterprise customers solving real-world security challenges. Today I spend my days on the front lines with CISOs across industries.
Across all of these roles, one question keeps coming up: How do you protect the business without slowing it down?
Security should enable the business, not stand in its way.
Yet when it comes to protecting sensitive data, many organizations experience the opposite. And nowhere is that tension clearer than in DLP.
The Broken Promise of DLP
Among all security categories, DLP is the one that most consistently fails to deliver on its promise.
Organizations buy it to protect their most sensitive data. But in practice, many teams experience something very different.
The system generates noise, misses real incidents, and frequently blocks legitimate work.
Security teams are forced into an adversarial role: the people who slow everyone else down.
The root problem isn’t poor implementation - it’s the strategy itself.
Most legacy DLP platforms rely on static detection rules. These rules determine which events are surfaced to analysts and which ones are silently filtered out.
But data security doesn’t work well in a world of predefined rules - the most dangerous incidents are often the ones no one predicted.
As I often tell customers: You can’t write rules for scenarios you didn’t imagine.
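A toy sketch makes the limitation concrete. The rule below is invented for illustration (it is not any vendor's actual rule syntax): a static pattern fires on exactly the data shape its author anticipated, such as a credit-card-like number, and stays silent on everything else.

```python
import re

# Hypothetical static DLP rule: flag events containing a
# credit-card-like number (16 digits, optional separators).
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def rule_matches(event_text: str) -> bool:
    """Return True if the static rule fires on this event."""
    return bool(CARD_PATTERN.search(event_text))

# The rule catches what its author predicted...
rule_matches("Customer card: 4111 1111 1111 1111")   # True

# ...but a proprietary algorithm pasted into an external chat
# matches no predefined pattern, so it sails through unnoticed.
rule_matches("Here is our full pricing model source code")  # False
```

However many such patterns a team writes, the rule set can only ever describe incidents someone already imagined.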
The Impossible Operating Model
Security teams trying to run effective DLP programs quickly discover they are forced into one of three choices:
1. Lower the detection threshold to zero
Surface everything, or at least as much as the system can see. But that creates a flood of alerts that requires an army of analysts to triage. Very few organizations have the resources to operate this way.
2. Build hundreds of detection rules
Teams try to codify human behavior, business workflows, and compliance policies into static rules. But rules only capture what teams expect and what can be expressed as patterns. Everything else - the unknown, the complex, the real world - goes undetected.
3. Run DLP for compliance
Running legacy DLP tools properly requires significant cost and resources. So many organizations leave them in passive mode to satisfy auditors. The result: shelfware that rarely protects the business.
None of these options actually solve the problem.
A System Built for a Simpler World
The first modern DLP system, Vontu (acquired by Symantec), was introduced nearly two decades ago. At the time, enterprise environments looked very different.
Applications were mostly on-premises, data flows were predictable, and technology stacks were relatively stable and simple.
Today’s environments are vastly more complex. Organizations now operate across:
- Hundreds of SaaS applications
- Multiple browsers, many of them unmanaged
- Shadow IT and shadow AI
- GenAI tools and LLMs
- Developer ecosystems with IDEs, CLIs, and AI coding assistants
And the list keeps growing.
Data itself has also changed. It’s increasingly unstructured, dynamic, and difficult to classify using traditional methods.
Meanwhile, businesses are moving faster than ever. AI is accelerating productivity across organizations - but it is also dramatically increasing the ways sensitive data can be exposed.
I often say that AI is both a blessing and a curse. It enables organizations to innovate faster. But along the way it introduces entirely new security risks.
In many ways, AI didn’t create the data security gap - it exposed it.
The Real Bottleneck: Investigation
The biggest bottleneck in modern DLP programs isn’t detection - it’s investigation.
Security analysts spend enormous amounts of time performing manual investigative work - triaging alerts, gathering context, correlating disconnected signals, and reconstructing the narrative behind an incident.
They’re essentially trying to build the story of what happened from fragmented technical events. In many cases, the investigation can’t even be completed without interviewing the employee involved.
Context is almost always missing.
Meanwhile, security teams are also dealing with endless rule tuning, constant policy adjustments, integrations across an ever-changing tech stack, and browser plugins and extensions.
Running DLP often turns into a never-ending whack-a-mole game.
And prevention, the feature that is supposed to protect the organization, frequently becomes the biggest source of friction.
Inaccurate detection leads to false blocks that interrupt legitimate workflows and frustrate employees. That is how DLP earned its reputation as the Department of No.
Why Even the Biggest Companies Still Get Hit
Even organizations with mature security programs continue to experience major insider incidents: Intel, Coupang, Tesla. These companies have world-class security teams and significant investments in data protection, yet sensitive data still leaves the organization.
The problem isn’t that security teams aren’t trying hard enough; it’s that the core technology used to protect data was designed for a very different era.
Why This Problem Can Finally Be Solved
For the first time, new technologies are making it possible to rethink this model from the ground up.
Advances in AI, contextual reasoning, and agentic systems allow security platforms to investigate events the way human analysts would, but at machine scale.
Instead of relying on static rules to filter events, a modern system can analyze activity holistically and reconstruct incidents automatically.
It can understand the data involved, the context around the action, the user’s intent, the broader narrative of what actually happened, and how it fits within the business.
In other words: Security tools should explain what happened, not ask analysts to reconstruct it.
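To make the idea concrete, here is a minimal, hypothetical sketch (all names and fields are invented for illustration, not a description of any real product): instead of a bare rule match, each raw event is enriched with the surrounding context — data sensitivity, destination, and the user's situation — and the output is an explainable verdict rather than an anonymous alert.

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str               # e.g. "upload", "paste", "share"
    destination: str          # e.g. "personal-drive", "corp-sharepoint"
    data_label: str           # e.g. "public", "internal", "confidential"
    recently_resigned: bool   # context: part of the broader narrative

def assess(event: Event) -> str:
    """Combine data sensitivity, destination, and user context into a
    single explainable verdict instead of a raw alert."""
    sensitive = event.data_label == "confidential"
    risky_dest = event.destination == "personal-drive"
    if sensitive and risky_dest and event.recently_resigned:
        return "block: confidential data to personal storage by a departing employee"
    if sensitive and risky_dest:
        return "investigate: confidential data leaving sanctioned channels"
    return "allow: consistent with normal business workflow"

verdict = assess(Event("jdoe", "upload", "personal-drive",
                       "confidential", recently_resigned=True))
```

The point of the sketch is the shape of the output: a verdict that already carries its own story, so the analyst reviews a narrative instead of reconstructing one.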
This fundamentally changes how data security operates.
Building Jazz
At Jazz, we believe DLP should evolve from a rigid rule engine into an intelligent investigative system.
One that understands context, learns the unique environment of each organization, and helps security teams make faster and more confident decisions.
Instead of forcing teams to maintain hundreds of rules and chase integrations, the system should provide:
- Holistic coverage by default
- Precision by design
- Context-rich investigations
- Surgical prevention that doesn’t disrupt legitimate work
When security tools understand how work actually happens, they stop acting like gatekeepers and start acting like partners.
That’s how DLP transforms from a business roadblock into a true business enabler.
The Future of Data Security
Being a security leader today is harder than ever. Organizations must protect their most sensitive data while enabling employees to move faster, adopt AI, and innovate.
That requires a fundamentally different approach to data security. One that understands context, one that adapts to modern environments, one that protects the business without slowing it down.
That’s the future we’re building at Jazz - and we’re just getting started.